Paul Ruler

My feedback

  1. 114 votes

    8 comments  ·  Azure Databricks
    Paul Ruler supported this idea  · 
    Paul Ruler commented  · 

    +1 for the "allow Git integration with DevOps service other than the Databricks AD tenant" feature.

    We have also hit this issue, whereby the Databricks services we are deploying and using are homed to a different AD tenant from the one hosting our Azure DevOps Git repos.

  2. 94 votes

    13 comments  ·  SQL Managed Instance  ·  Admin →

    LTR (Long Term Retention) backup for Managed Instance is being worked on and is expected to reach preview in calendar year 2020. Updates will follow when the feature becomes available.

    Paul Ruler supported this idea  · 
  3. 14 votes

    2 comments  ·  Azure Databricks » Strong Feedback  ·  Admin →

    Thanks for the valid suggestion. Your feedback is now open for the user community to upvote & comment on. This allows us to effectively prioritize your request against our existing feature backlog and also gives us insight into the potential impact of implementing the suggested feature.

    Paul Ruler commented  · 

    Hello @Ron Bokleman,

    We have faced a similar issue when deploying Databricks into subscriptions tightly controlled by both resource group and resource naming policies, which blocked the dynamic creation of the required Databricks resource groups and resources.

    If you are using the portal, you have no control over the resource group name it automatically creates. If you deploy via an ARM template (which is what we do), you can take control and force the automatically created Databricks resource group to take a name of your choosing by setting the WorkspaceProperties/managedResourceGroupId value. This won't apply to the resources it creates inside that group by default, or to those created while clusters are running; I haven't found a way of controlling those names. We needed the customer to introduce exclusions for dynamic resource creation within these now name-controlled resource groups, which has worked well so far.

    An extract from our template; the variable is simply a concat string that builds up the resource ID of the resource group to be created (see the fuller sketch below):
    "ManagedResourceGroupId": "[variables('managedResourceGroupId')]",

    By including a tags section on the Microsoft.Databricks/workspaces resource you are provisioning, the automatically created group will also receive the tag values you have specified, and these apply as well to the 3 default resources it creates inside this group (1 x storage account, 1 x NSG, 1 x VNet), along with its own system tag values. Again, you can only do this via an ARM template.
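
    As a sketch of that tags section (the tag names and parameter references here are placeholders of our own, not anything Databricks requires):

    "tags": {
        // Placeholder tag names/values; Databricks propagates these to the
        // managed resource group and the default storage account, NSG and VNet
        "costCentre": "[parameters('costCentre')]",
        "environment": "[parameters('environment')]"
    },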

    I take your point about resource groups being created automatically without any way to control their names; that is a valid issue for Azure to work through, but I hope this helps you, particularly with Databricks instance provisioning.
