@Nick While you are technically correct, unreliable installation and/or bugs within the tooling are beside the point.
Let me clarify my position. What I am advocating for is a consistent, streamlined, end-to-end developer experience. As it stands now, Azure Data Factory has been created as an isolated microcosm with its own web-based authoring UX. Sure, you can link it up to an (Azure DevOps) Git repository, but even the suggested CI/CD process there is weird and unintuitive, with a completely detached 'adf_publish' branch where the actual deployment artifacts are pushed. They've sort of reinvented the wheel by devising a totally unique flow which only applies to Data Factories. This rather myopic picture extends all the way to monitoring - granted, one can jump through hoops to patch up some unified view by using Azure Monitor.
This would be totally fine if you really could develop, deploy, test and run data pipelines in isolation. But in my experience this is rarely the case. The way the tooling has been designed perpetuates a model in which an army of specialized data engineers becomes the bottleneck for producing any meaningful value. As soon as you introduce any dependencies into the overall solution - such as a relational database - it becomes far less obvious what the development process and the associated, preferably automated, deployment steps would look like. If somebody can point me to any non-trivial example, I'm happy to re-evaluate my position.
Until then, however, what I'm asking for is to reimagine the tooling to be aligned with the value stream of a working end-to-end solution instead of just focusing on the "data pipes" as an isolated problem. The fact that Data Factory has to be deployed as a monolith does not exactly help here either.
What I'm saying is that the current incarnation of ADF v2 - and its failure to integrate properly with other development tooling such as Visual Studio - is just a symptom of a larger underlying issue.
I could also talk about the shortcomings of a visual authoring experience for creating and understanding any reasonably complex data pipeline - as well as code reusability - but that's a different topic altogether.
The DataOps story using Microsoft tooling is woefully inadequate without this. One should really be able to work on all the interrelated components within a data platform with unified tooling, commit into version control and deploy the changes into the appropriate environments e.g. with Azure DevOps. Having ADF support in Visual Studio is one piece of the puzzle and it seems that the teams developing various Azure services are not thinking about this holistically.
Now that the SSDT support for Azure SQL DW database projects is (finally) in preview, and ADF is the preferred Microsoft-provided tool for orchestrating data transformations, it should be fairly obvious that these things are closely related.
Sad that this is the top-voted feedback item for Data Factory and not even a comment from the team in 2 years...
93 votes · under review · 1 comment · API Management » Developer portal
The discussion here https://docs.microsoft.com/en-us/azure/api-management/api-management-howto-protect-backend-with-aad indicates that many are struggling with the same exact issue. And surprisingly, no response.
People are just using different terminology, such as: "headless integrations", "service-to-service calls", "API-to-API scenarios"... But it's the same problem.
I'm curious how people have gone about this. It seems pretty obvious that Azure API Management in its current state does not exactly support exposing APIs which are meant to be consumed directly by daemon / server applications.
I'm talking about scenarios where the end-user is not involved and a simple two-legged Client Credentials Grant would suffice.
You can cobble something together, but the end result is not pretty. I think this is a major shortcoming.
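For context, here is a minimal sketch of the two-legged flow I mean, using only the Python standard library. The Azure AD v2.0 token endpoint, the `client_credentials` grant type, and the `/.default` scope convention are real; the tenant, client ID, secret and scope values are placeholders, and the helper names are my own.

```python
# Two-legged OAuth 2.0 Client Credentials Grant against Azure AD (v2.0 endpoint).
# No end-user is involved: the daemon authenticates as itself.
import json
import urllib.parse
import urllib.request

TOKEN_URL = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"

def build_token_request(tenant: str, client_id: str,
                        client_secret: str, scope: str) -> urllib.request.Request:
    """Build the POST request that exchanges client credentials for an
    app-only access token. All argument values are placeholders."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scope,  # e.g. "api://<backend-app-id>/.default"
    }).encode()
    return urllib.request.Request(
        TOKEN_URL.format(tenant=tenant),
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

def fetch_token(request: urllib.request.Request) -> str:
    """Perform the token request (network call) and return the bearer token."""
    with urllib.request.urlopen(request) as resp:
        return json.load(resp)["access_token"]
```

The daemon would then call the APIM-fronted API with the resulting bearer token (alongside the usual subscription key header), and the gateway could check the token with a validate-jwt policy before forwarding to the backend. The flow itself is simple; my complaint is that wiring it through API Management and its developer portal is where things get ugly.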