Data Factory
Azure Data Factory allows you to manage the production of trusted information by offering an easy way to create, orchestrate, and monitor data pipelines over the Hadoop ecosystem using structured, semi-structured and unstructured data sources. You can connect to your on-premises SQL Server, Azure database, tables or blobs and create data pipelines that will process the data with Hive and Pig scripting, or custom C# processing. The service offers a holistic monitoring and management experience over these pipelines, including a view of their data production and data lineage down to the source systems. The outcome of Data Factory is the transformation of raw data assets into trusted information that can be shared broadly with BI and analytics tools.
Do you have an idea, suggestion or feedback based on your experience with Azure Data Factory? We’d love to hear your thoughts.
-
Support delete writeBehavior in DynamicsSink
I am unable to perform the delete operation in Dynamics 365 using a Copy Activity.
Using the same Copy Activity I am able to perform the create, update and read operations in Dynamics 365.
1 vote -
Allow ORC to be used as an source/sink format in DataFlow
We currently cannot use ORC as a source/sink type in Data Flow jobs. This requires an extra copy into Parquet format, which can cause issues because Parquet does not support as many data types as ORC. Allowing ORC would remove the need for this extra copy operation that could potentially cause data type issues.
1 vote -
When will the Azure V2 preview be GA?
When will the Azure V2 preview be GA? After reading the preview policy below, we cannot use it in production.
PREVIEWS ARE PROVIDED "AS-IS," "WITH ALL FAULTS," AND "AS AVAILABLE," AND ARE EXCLUDED FROM THE SERVICE LEVEL AGREEMENTS AND LIMITED WARRANTY. Previews may not be covered by customer support. Previews may be subject to reduced or different security, compliance and privacy commitments, as further explained in the Microsoft Online Services Privacy Statement, Microsoft Azure Trust Center, the Online Services Terms, and any additional notices provided with the Preview. Customers should not use Previews to process Personal Data or other data that…
1 vote -
Please include a notification mail task and the ability to enable/disable pipeline tasks, similar to SSIS
Please include a notification mail task and the ability to enable/disable pipeline tasks, similar to SSIS.
1 vote -
Copy/pasting activities in the UI should not drop the "Name" property
Currently when you cut or copy/paste activities in the UI, the Name property is simply dropped and replaced by a default name such as "GetMetadata1".
That is a big nuisance, especially when copying multiple activities (such as a ForEach or If with contained activities).
Instead it should keep the existing name and, in case of duplicates, add a suffix.
1 vote -
Hi, I am not able to see the parameter tab in COPY Activity in ADF now.
Hi, I want to load data incrementally from one server to another using change version control. Please let me know how I can do that.
1 vote -
API to push data into ADF
support API calls to push data into Azure Data Factory from outside sources
1 vote -
Provide untar functionality in ADF
Currently ADF does not support untar functionality. A number of SaaS vendors send extracts as tarball files. We need support for untarring while moving data from source to sink.
1 vote -
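Until native support arrives, one workaround is a pre-processing step (for example in a custom activity) that extracts the tarball before the copy runs. A minimal Python sketch, assuming the archive has already been staged locally; the `untar` helper name is hypothetical:

```python
import os
import tarfile

def untar(archive_path: str, dest_dir: str) -> list:
    """Extract a .tar/.tar.gz archive and return the extracted file paths."""
    os.makedirs(dest_dir, exist_ok=True)
    extracted = []
    with tarfile.open(archive_path, "r:*") as tar:  # "r:*" auto-detects compression
        for member in tar.getmembers():
            # Guard against path traversal ("tar slip") in untrusted archives.
            target = os.path.realpath(os.path.join(dest_dir, member.name))
            if not target.startswith(os.path.realpath(dest_dir) + os.sep):
                raise ValueError("Unsafe path in archive: " + member.name)
            tar.extract(member, dest_dir)
            if member.isfile():
                extracted.append(os.path.join(dest_dir, member.name))
    return extracted
```

The extracted files can then be picked up by a normal Copy Activity pointed at `dest_dir`.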
Support integration with SAP HANA DB hosted in SAP cloud platform
Please confirm whether we can enable ADF connectivity with SAP HANA DB hosted in the SAP Cloud Platform.
1 vote -
File name suffix should actually be extension
"File name suffix" should actually be labeled "File name extension".
The user will only notice that it is the file extension after the pipeline has run, because if it is configured as blank, the files will have no extension.
1 vote -
Need a simple "point at my windows folder or file" and then duplicate the folder structure and files in the destination.
easy way to get files from your computer to Azure
1 vote -
Get Metadata Activity relational store metadata only supported for Microsoft RDBMSs
Currently, the Get Metadata Activity only returns metadata columns for Microsoft RDBMSs, including SQL Server, Azure SQL Database, Azure SQL Database Managed Instance, and Azure SQL Data Warehouse. Could this support be broadened to include other ANSI SQL RDBMSs, for example PostgreSQL, Oracle, MySQL, etc.?
This functionality is critical for detecting underlying changes to the source system that could cause the pipeline to fail.
1 vote -
Include Data Read and Data Written in Data Factory Views of Azure Data Factory Analytics
The current analytics only show activity runs at the summary level. It would be great to be able to drill into an activity and understand how much data was read from the source and written to the sink.
1 vote -
Azure Data Factory - Expose error information inside pipeline
In Azure Data Factory V2 pipelines, there is currently no way to programmatically inspect the last error, even within a "failure" flow. This is a serious oversight that impedes error handling, complex control flow, and even custom logging.
1 vote -
Alert for ADF Run Azure SQL Stored Procedure activity timeout
Allow alerts to be set on an ADF Run Stored Procedure activity time out. Currently these set the activity status to 'Timed Out' which cannot be selected from the Azure Alert status window.
1 vote -
Pipeline name with parameters
Pipeline name with parameters
In our current ADF pipeline job, a single pipeline calls 160+ table loads as multiple instances with the help of a Lookup activity. We need the table names in ADF monitoring; currently we can only see the pipeline name.
Having them displayed along with the pipeline name would enable easy monitoring.
0 votes -
Changing column names in Data Flow sometimes leads to errors in expressions
Changing a column name in an earlier node of the Data Flow will make the subsequent nodes automatically pick up this name as well. However, if the column name is changed to a value containing a space or dash (which it did not previously contain), the new name will not be wrapped in the curly-bracket column indicator in expressions. This leads to errors.
For example, changing the column name test1 to test-1 results in test-1 in the expression, where it should have become {test-1}. Otherwise it is read as "column test minus 1".
0 votes -
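The escaping convention described above can be expressed as a small rule: a name that is a plain identifier needs no wrapping, anything else must become `{name}`. An illustrative Python sketch of that rule (the `column_ref` helper is hypothetical, not ADF code):

```python
import re

# Plain identifiers need no escaping in Data Flow expressions;
# anything else (spaces, dashes, etc.) must be wrapped as {name}.
IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def column_ref(name: str) -> str:
    """Return the column reference as it should appear in an expression."""
    return name if IDENTIFIER.match(name) else "{" + name + "}"
```

For example, `column_ref("test1")` stays `test1`, while `column_ref("test-1")` becomes `{test-1}`, which is the form the rename propagation should emit.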
Fault Tolerance Update - Check for valid file type in pipeline activity
When reading in files from a directory, the pipeline will fail if a JSON document is in an invalid format. The Fault Tolerance settings do not allow the pipeline to continue, regardless of the setting chosen.
Fault Tolerance should skip incompatible rows as listed, instead of failing the activity.
1 vote -
ADFv2 - Date function need addmonths and addyears
Date functions in ADF V2 support addminutes and addhours; we would like addmonths and addyears please. These little things make a huge difference during implementation.
0 votes -
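In the meantime the missing semantics can be reproduced in a pre/post-processing step. A Python sketch of the usual add-months behavior (the day is clamped to the target month's length, e.g. Jan 31 + 1 month gives Feb 28/29); the helper names are hypothetical:

```python
import calendar
import datetime

def add_months(d: datetime.date, months: int) -> datetime.date:
    """Shift a date by whole months, clamping the day to the target month."""
    total = d.month - 1 + months
    year = d.year + total // 12
    month = total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])  # clamp e.g. 31 -> 28/29
    return datetime.date(year, month, day)

def add_years(d: datetime.date, years: int) -> datetime.date:
    """addyears is just addmonths with a multiple of 12."""
    return add_months(d, 12 * years)
```

Defining the behavior this precisely (clamping rather than overflowing into the next month) is exactly why a built-in function would be preferable to everyone hand-rolling it.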
Text format escape char only if needed or per field
When generating a CSV file, string fields can potentially contain column or line delimiters. To avoid this, two approaches can be followed: 1) an escape char (not a CSV standard and not widespread; Excel doesn't even handle it well), or 2) a quote char. With the second option all string fields are "quoted", which is quite inefficient and gets worse the more columns you have. It would be better if it worked like some editors, which quote only when needed (when a column or line delimiter is actually present in the field).
0 votes
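The requested "quote only when needed" behavior is what Python's standard `csv` module calls `QUOTE_MINIMAL`; a short demonstration of the desired output shape:

```python
import csv
import io

rows = [
    ["id", "comment"],
    ["1", "plain"],
    ["2", "has,comma"],
    ["3", 'has "quote"'],
]

buf = io.StringIO()
# QUOTE_MINIMAL quotes a field only if it contains the delimiter,
# the quote char, or a line terminator.
writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL, lineterminator="\n")
writer.writerows(rows)
print(buf.getvalue())
```

Only the rows containing a comma or a quote char end up quoted (`2,"has,comma"` and `3,"has ""quote"""`); plain fields are written bare, which is the efficiency win described above.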