Enable using local filesystem for Linked Templates
Allow an ARM template to reference a relative file path on the local file system for accessing Linked Templates. It seems absurd that this isn't already available. We shouldn't be forced to put our templates in a publicly available resource to retrieve them and use them in other templates.
Jannik Buschke commented
This is just ridiculous to be honest. Developer experience with ARM templates is really mediocre.
I stopped waiting and just started using terraform. No issue there and it's even more versatile.
I made a PowerShell tool for this while we wait. 🙂
Brooks Vaughn commented
How about a PowerShell cmdlet that takes the parameters.json and templates.json, follows each ContentLink that points to a local file, and produces a combined hashtable suitable as a -TemplateObject for Test-AzResourceGroupDeployment and New-AzResourceGroupDeployment? The cmdlet could also have an -AsJson switch for saving the result as a combined JSON template file.
That way we could build, test, and debug complex ARM templates that use ContentLinks quickly, since we wouldn't waste time checking in changes, publishing or uploading, and writing code to generate SAS tokens just to test a deployment.
Debugging ARM templates during development is a real pain when they include ContentLinks.
Please address this deficiency.
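Brooks's merging idea can be sketched in a few lines; below is a minimal Python version of the inlining step, assuming linked templates are referenced by relative file paths in the templateLink uri (the function name and path handling are illustrative, not an existing tool). Note that inlining a template as a nested template changes how its expressions are scoped, which is a limitation raised elsewhere in this thread.

```python
import json
from pathlib import Path

def inline_linked_templates(template_path):
    """Load an ARM template and replace any templateLink that points to a
    local relative path with an inline nested template.  A hypothetical
    sketch of the cmdlet behaviour described above, not an official tool."""
    template_path = Path(template_path)
    template = json.loads(template_path.read_text())

    for resource in template.get("resources", []):
        if resource.get("type") != "Microsoft.Resources/deployments":
            continue
        props = resource.get("properties", {})
        uri = props.get("templateLink", {}).get("uri", "")
        # Treat anything without an http(s) scheme as a local relative path.
        if uri and not uri.startswith(("http://", "https://")):
            nested = json.loads((template_path.parent / uri).read_text())
            del props["templateLink"]
            props["template"] = nested  # inline as a nested template
    return template
```

Running Test-AzResourceGroupDeployment against the merged object (or its serialized JSON) would then need no storage account or SAS tokens at all.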
Niek Maarse commented
I would also like this. It would be great if, when deploying an ARM template, you could upload the linked templates along with it. The ARM template itself is uploaded to Azure anyway, so uploading the linked templates before execution would be a natural extension. That way you could source your templates from a relative path on your local machine, or from Azure DevOps, without the hassle of making them available via a URL.
The current way of referencing a URL that must be reachable by Azure Resource Manager is quite cumbersome. When using Azure DevOps you need extra steps: first create a storage account (using an ARM resource task), then add a copy step that copies the template you want to link to, and finally a step that executes the main ARM template, which calls the linked template.
If you want to experiment with a linked template, you have to push your local copy to the URL used by your main template, which is again cumbersome.
I was hoping to get this to work by means of the Azure DevOps API's raw file access and a Personal Access Token. However, one cannot currently inject anything into the URL request header, so the approach described here does not work.
Could adding an optional AuthenticationHeaderValue to the templateLink property be a way forward? That way any file and branch in a private DevOps repo could be pulled into the deployment engine.
It's not the local file system, but it would remove the requirement to first copy the code from the repo to blob storage and prepare a SAS token.
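For what it's worth, the header DevOps expects for PAT access is plain HTTP Basic auth with an empty user name, which is exactly what cannot be attached to a bare templateLink URI today. A minimal Python sketch of building that header:

```python
import base64

def devops_auth_header(pat):
    """Build the Authorization header Azure DevOps expects for Personal
    Access Token access: HTTP Basic auth with an empty user name and the
    PAT as the password.  This is the header that cannot be injected into
    a templateLink request today."""
    token = base64.b64encode(f":{pat}".encode("ascii")).decode("ascii")
    return {"Authorization": f"Basic {token}"}
```

Passing this dict as headers to any HTTP client (e.g. urllib.request.Request) fetches the raw file fine; the deployment engine just has no way to do the same.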
This is necessary in large part because nested templates don't support some important features of external/linked templates, e.g. their own parameters, references, and resource group targeting. For example, if we want services deployed in two resource groups to be connected to the same Virtual Network, one or both subnets must be deployed by linked template. If nested templates were full-featured, I could convert modular templates into monolithic ones; the Azure CLI could even do that for me automatically, effectively providing the capability requested here.
Florian Scheepstra commented
Some sort of solution is actually required when using templateLinks in ARM files that you keep under source control and push out via Azure DevOps pipelines.
Currently a storage account is required, but if we could circumvent that need in the pipeline it would make the whole deployment much easier, less fragile, and perhaps even more secure.
One possible solution is to merge the templates before pushing them to Azure. This could even be a new dedicated task, so the current Azure Resource Manager task wouldn't need any changes...
Are you sure the recommendation is to publish the nested templates in a public Git repository? With all due respect, please be serious.
Bon Franklin commented
I think this simply isn't possible. When the ARM API processes a linked template element, that happens in the Azure datacenter, and it can't call back to your machine and ask for another local file, because it is simply a server API, not a client/server application that prescribes the behavior of the client. Each ARM template deployment is a request and a response, and the request sends the entire template. The server responds with metadata identifying that deployment, so in theory the system could be extended so that you use that identifier to make a second call passing the linked template, with the server-side execution waiting until it arrives.
The only way to keep a linked template private is to use a one-time disposable URL, or an Azure Storage SAS token with an expiration far enough out that the URL is still valid by the time the linked template gets fetched.
Sorry everyone, but I just don't think this can happen.
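For anyone generating those SAS tokens by hand, the computation Bon describes has this general shape (a hedged sketch only: the real Azure Storage string-to-sign has a specific, documented field order that this example does not reproduce):

```python
import base64
import hashlib
import hmac
from datetime import datetime, timedelta, timezone

def sign_sas(string_to_sign, account_key_b64):
    """Illustrate the SAS mechanism: the service and the caller share an
    account key, and the signature is an HMAC-SHA256 over a canonical
    string that includes the expiry time.  Only the shape of the
    computation is shown here, not Azure's exact canonical format."""
    key = base64.b64decode(account_key_b64)
    sig = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(sig).decode("utf-8")

# A short-lived expiry keeps the URL effectively disposable, e.g. 30 minutes.
expiry = (datetime.now(timezone.utc) + timedelta(minutes=30)).strftime("%Y-%m-%dT%H:%M:%SZ")
```

Because the expiry is baked into the signed string, tampering with the `se` parameter on the URL invalidates the signature, which is what makes the short-lived URL approach workable.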
Katrin Shechtman commented
Please, please, please - it is one of the most important missing features along with generating passwords!
Alexander Batishchev commented
The most needed feature in the world of linked/nested templates!!
Seconded, as local file support has an advantage over an online repo: there may be a need to retain the JSON files on a local machine due to compliance and security requirements. I have tested Michael's suggestion, but it's more applicable to a dev/test environment than to a local machine. Would be delighted to see this error message go away: #Azure
The provided content link 'file:///E:/ps/arm/helloworld.json' is invalid or not supported. Content link must be an absolute URI not referencing local host or UNC path.
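The rule behind that error can be approximated as follows; this is a sketch inferred from the message text, not the actual Resource Manager validation code:

```python
from urllib.parse import urlparse

def validate_content_link(uri):
    """Approximate the content-link check implied by the error message:
    the link must be an absolute http(s) URI, and file:// URIs, relative
    paths, localhost, and UNC paths are all rejected."""
    parsed = urlparse(uri)
    if parsed.scheme in ("", "file") or uri.startswith("\\\\"):
        return False  # relative path, file:// URI, or UNC path
    if parsed.hostname in ("localhost", "127.0.0.1"):
        return False  # must not reference the local host
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```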
Only being able to reference a URL for the templateLink means I have to check in my code (template) changes *before* I can test them, which is untenable.
I'm sure it's difficult to achieve support for local files but it would be a significant improvement.
Couldn't agree more with this. While it's not strictly accurate that we are forced to put them in a publicly accessible location, the process for making that location private yet accessible (SAS tokens, etc.) is another complicated layer: generate tokens, figure out where to keep them (they can't be checked in), then write code that reads a key vault to get the token that unlocks the private storage that holds the templates... yuck. I now have to design and build an entire system just to read the templates.
My templates are checked into source control, and I'm editing them right on my box. Let me run them from there. I can already run a single template from there, so there is no access/other security issue that should be blocking this.
If I want to later go through the effort to move everything to a cloud-based storage location, that should be up to me.
Torben Knerr commented
support for file:///c:/foo/template.json URI please! :)
Qiang Li commented
It would be a great feature if we could also package all required files/resources (e.g. in an archive format) so that the template is self-contained.
Michael, that really isn't referencing a local template from a local template like Alex Marshall is suggesting though.
This already is available; at least, I'm able to reference my linked templates locally. Or, more accurately, I use the _artifactsLocation variable to point to a storage account that only I have access to, which hosts the templates while I deploy.