Allow static value columns in addition to the columns available in source files
We have a requirement to delete the existing data in SQL Azure based on some criteria, but we don't have a way of assigning a global variable/parameter and passing its value across activities.
We have different folders to pick up data from, and the two folders never have files at the same time. The data flow and transformation of the data are the same, yet for the same kind of work we have to build separate data flows (multiple datasets and pipelines/activities).
How about allowing us to define a static value for a column in the Dataset/Pipeline?
Folder 1 data flow -> if files exist in folder 1, then during the copy activity we can define a column containing a flag indicating that the existing data must be deleted (DeleteFlag=1).
Folder 2 data flow -> if files exist in folder 2, then during the copy activity, DeleteFlag=0.
So, in addition to the columns defined in the file, this extra column would be appended.
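For illustration, here is a rough sketch of what such a definition might look like in a Copy activity. The additionalColumns property is only a mock-up of the requested feature, not an existing setting, and the activity, column, and source/sink names are placeholders:

    {
        "name": "CopyFolder1Files",
        "type": "Copy",
        "typeProperties": {
            "source": {
                "type": "BlobSource",
                "additionalColumns": [
                    {
                        "name": "DeleteFlag",
                        "value": "1"
                    }
                ]
            },
            "sink": {
                "type": "SqlSink"
            }
        }
    }

The folder 2 pipeline would be identical apart from "value": "0", so the rest of the dataset and mapping definitions could be shared.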
Tanya Smith commented
As a developer, this is a very similar need to: https://feedback.azure.com/forums/270578-data-factory/suggestions/17332714-allow-adding-of-additional-columns-to-data-source
We need to be able to assign a variable to a column; otherwise, many more stored procedures and much more maintenance are required. This is basic functionality...
Marc Sanders commented
I would really appreciate it if it were possible to specify a hardcoded value, a variable, or a parameter as the value for a target field.
Exactly! It would be convenient if we were able to add a constant column, or a column that accepts variables from previous steps, while copying data to the destination.
Jason Horner commented
You can kind of work around this by dynamically creating a SELECT statement with a SQL source, but I agree it would be better to just be able to enrich datasets with values, similar to how the user properties infrastructure works.
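To make that workaround concrete, here is a rough sketch assuming a SQL source and a hypothetical DeleteFlag pipeline parameter; the table and column names are placeholders:

    "source": {
        "type": "SqlSource",
        "sqlReaderQuery": {
            "value": "SELECT Col1, Col2, @{pipeline().parameters.DeleteFlag} AS DeleteFlag FROM dbo.SourceTable",
            "type": "Expression"
        }
    }

The limitation is that this only works when the source is a database; file-based sources, like the folders in the original request, have no query into which the extra column can be injected.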
Jeffrey Hebrank commented
Yes! I was looking for this feature today. I really want to be able to insert the date key and the name of the file being ingested into the same table that the file data is being inserted into.
Shannon Lowder commented
It would be nice to be able to add dynamic columns too! During extracts, we want an InsertedDate and an ID of the process that moved the data. Both of these values are available in Azure Data Factory, but there is no way to add them as columns alongside our mapped columns.
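If the feature took the shape sketched under the original request, those two values could presumably be supplied through Data Factory's existing expression language (utcnow() and pipeline().RunId are real expression constructs; additionalColumns remains hypothetical):

    "additionalColumns": [
        {
            "name": "InsertedDate",
            "value": "@{utcnow()}"
        },
        {
            "name": "ProcessId",
            "value": "@{pipeline().RunId}"
        }
    ]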
Jason Ricke commented
I would also add the ability to assign a variable, either a pipeline parameter or the output of a Lookup activity, to a column.
We create an extraction record using a Lookup activity and a stored procedure. We then use the ExtractionId from the Lookup activity's output as a batch identifier in our staging tables. Without the ability to assign this variable to a column in the Copy activity's source, we are forced to use stored procedures in the copy activity, which requires more SQL objects and thus more maintenance.
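To make the ask concrete, here is a rough sketch of this scenario, again using the hypothetical additionalColumns mock-up from the original request, this time with an expression instead of a static value. All object names (CreateExtraction, dbo.usp_CreateExtraction, ControlDb) are placeholders; the syntax for reading a Lookup activity's output is existing Data Factory expression language, while the column assignment itself is the requested feature:

    "activities": [
        {
            "name": "CreateExtraction",
            "type": "Lookup",
            "typeProperties": {
                "source": {
                    "type": "SqlSource",
                    "sqlReaderQuery": "EXEC dbo.usp_CreateExtraction"
                },
                "dataset": {
                    "referenceName": "ControlDb",
                    "type": "DatasetReference"
                },
                "firstRowOnly": true
            }
        },
        {
            "name": "CopyToStaging",
            "type": "Copy",
            "dependsOn": [
                {
                    "activity": "CreateExtraction",
                    "dependencyConditions": [ "Succeeded" ]
                }
            ],
            "typeProperties": {
                "source": {
                    "type": "BlobSource",
                    "additionalColumns": [
                        {
                            "name": "ExtractionId",
                            "value": "@{activity('CreateExtraction').output.firstRow.ExtractionId}"
                        }
                    ]
                },
                "sink": {
                    "type": "SqlSink"
                }
            }
        }
    ]

With something like this, the staging tables would receive their ExtractionId batch identifier without any extra stored procedures in the copy path.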