Azure Databricks

Azure Databricks is an Apache Spark-based analytics platform optimized for the Microsoft Azure cloud services platform. Designed in collaboration with the founders of Apache Spark, Databricks is integrated with Azure to provide one-click setup, streamlined workflows, and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts.

We would love to hear any feedback you have for Azure Databricks.
For more details about Azure Databricks, see our documentation page.

  1. Databricks Service principal per workspace for specific KeyVault access

    Databricks currently accesses Key Vault from the control plane and uses the same AzureDatabricks service principal for ALL Databricks workspaces in the tenant.

    At present, if you create a secret scope in workspace A on Key Vault A and a new secret scope in workspace B on Key Vault B, then the AzureDatabricks service principal has access to both Key Vaults. Therefore, provided you are privileged enough to know the details (resource URI) of the Key Vaults, you can create a scope from your own Databricks workspace C and gain access to all the keys!

    It should be possible to specify an…
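    For context, a Key Vault-backed scope is created with a request body like the sketch below; the field names follow the Databricks Secrets API `scopes/create` endpoint, and the resource details are placeholders. This illustrates why knowing a vault's resource ID and DNS name is enough to bind a scope to it from any workspace:

    ```python
    import json

    def keyvault_scope_payload(scope_name, keyvault_resource_id, keyvault_dns_name):
        """Build the request body for POST /api/2.0/secrets/scopes/create.

        Any user who knows the Key Vault's resource ID and DNS name can send
        this from any workspace, because the shared AzureDatabricks service
        principal (not a per-workspace one) performs the actual vault access.
        """
        return {
            "scope": scope_name,
            "scope_backend_type": "AZURE_KEYVAULT",
            "backend_azure_keyvault": {
                # e.g. /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<name>
                "resource_id": keyvault_resource_id,
                # e.g. https://<name>.vault.azure.net/
                "dns_name": keyvault_dns_name,
            },
        }

    payload = keyvault_scope_payload("scope-c", "<keyvault-resource-id>", "<keyvault-dns-name>")
    print(json.dumps(payload, indent=2))
    ```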

    23 votes  ·  0 comments  ·  Strong Feedback
  2. Azure Databricks should have more granular level access permissions

    Currently, Azure Databricks Workspace provides only four options for access permissions:

    1. Workspace Access Control
    2. Cluster and Jobs Access Control
    3. Table Access Control
    4. Personal Access Tokens

    These permissions grant users more access than they require.

    Would it be possible to create more granular permissions under Access Control?

    Specifically, for the requirements below:

    Access to view data sources
    Access to view Databricks runs to check failures and their reasons
    Access to view data changes and deployment issues
    Access to troubleshoot data processing failures caused by data issues or system errors in the Databricks workspace

    20 votes  ·  0 comments  ·  Strong Feedback
  3. STOP the nonsense of making Resource Groups for these services if you really want us to use them!! Completely annoying.

    Totally insane. Databricks is the WORST offender of this, but Network Watcher does it as well. I won't allow RGs to be created unless they are NAMED and TAGGED according to OUR rules, so people cannot use this service. Period.

    15 votes  ·  2 comments  ·  Strong Feedback

    Thanks for the valid suggestion. Your feedback is now open for the user community to upvote & comment on. This allows us to effectively prioritize your request against our existing feature backlog and also gives us insight into the potential impact of implementing the suggested feature.

  4. Launching the Databricks Workspace from the Azure Portal

    In order to launch the Databricks workspace, the user needs to be an owner or contributor at the Databricks resource level in the Azure portal, which is a burden for any enterprise planning to roll out to a larger audience.

    Providing the direct workspace backend URL to each end user manually is not ideal, since there are a few users now but there will be hundreds in the future.

    Permissions are set at the workspace and cluster level. When a user launches the workspace from the Azure portal, the API that calls Databricks should validate the existing…

    8 votes  ·  1 comment  ·  Strong Feedback
  5. Cluster initialization time is too long during Databricks job runs

    Even a simple job run, such as a "print hello_world" program, incurs a minimum fixed lag of 10-12 seconds for Spark initialization in Databricks, which is significant latency. This lag should be reduced as much as possible; certain other cloud providers, such as Google, have done so.

    6 votes  ·  0 comments  ·  Strong Feedback
  6. Support changing the VNet Address space or subnet CIDR of an existing Azure Databricks workspace

    Modification of the CIDR is quite common, especially when a proof of concept (POC) is a success and you want to go further and connect it to a corporate network.

    RFC 1918 addresses are a real challenge to maintain, and when you perform a POC, you cannot quickly obtain a /16 or /24 as required by the Databricks virtual network injection feature.

    For more information: I had missed the URL below saying this was not supported, and the impact I saw was that Spark commands no longer worked (dbutils still did).
    https://docs.microsoft.com/en-us/azure/databricks/kb/cloud/azure-vnet-jobs-not-progressing

    4 votes  ·  0 comments  ·  Strong Feedback
  7. Implement access token auto refresh when using credential passthrough

    When a cluster is configured with credential passthrough, we get an access denied error after one hour of running a notebook due to AD access token expiration. It would therefore be nice to have an access token auto-refresh feature, with no need for an Azure Active Directory admin to increase the AccessTokenLifetime for users.

    This feature is also cited in a comment here: https://feedback.azure.com/forums/909463-azure-databricks/suggestions/36879865-enable-azure-ad-credential-passthrough-to-adls-gen
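    Pending a built-in feature, the expiry check that such an auto-refresh would need can be sketched with the standard library alone. The token below is a fabricated stand-in, not a real AAD token, and `needs_refresh` is a hypothetical helper:

    ```python
    import base64
    import json
    import time

    def needs_refresh(jwt_token, skew_seconds=300):
        """Return True if the access token expires within `skew_seconds`.

        A long-running notebook could call this before each storage access
        and fetch a fresh token instead of failing once the (roughly
        one-hour) AccessTokenLifetime elapses, approximating the requested
        auto-refresh behavior.
        """
        payload_b64 = jwt_token.split(".")[1]
        payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
        claims = json.loads(base64.urlsafe_b64decode(payload_b64))
        return claims["exp"] - time.time() < skew_seconds

    # Build a fake token expiring in 30 seconds to exercise the check.
    claims = {"exp": int(time.time()) + 30}
    fake = "h." + base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=") + ".s"
    print(needs_refresh(fake))  # True: within the 300-second refresh window
    ```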

    4 votes  ·  0 comments  ·  Strong Feedback
  8. Azure Diagnostic logs are collected with up to a 24-hour delay, so alerts cannot be used

    As the doc says :
    On any given day, Azure Databricks delivers at least 99% of diagnostic logs within the first 24 hours, and the remaining 1% in no more than 72 hours.
    Refer : https://docs.microsoft.com/en-us/azure/databricks/administration-guide/account-settings/azure-diagnostic-logs#diagnostic-log-delivery

    In this case, if logs are sent to Log Analytics, log search alerts cannot be used to monitor those logs due to the unpredictable delay. This has been reported by multiple customers; we hope this can be enhanced.

    4 votes  ·  0 comments  ·  Strong Feedback
  9. Add transport of all Cluster events to Log Analytics workspace

    Currently the diagnostic settings for a workspace allow only a limited number of events to be transported into the Log Analytics DatabricksClusters category. Events that are missing include:

    • cluster termination
    • cluster auto-sizing events (adding machines, expanding disks, and the opposite)

    In general, it would be more than welcome to have all of the information available in the cluster Event log made available in Log Analytics as well.

    2 votes  ·  0 comments  ·  Strong Feedback
  10. 2 votes  ·  0 comments  ·  Strong Feedback
  11. Timestamp for each command

    It would be very helpful to see the exact timestamps for when a command started and finished processing, not only the runtime in milliseconds.
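    As a stopgap, a notebook cell can capture the timestamps itself. The wrapper below is a hypothetical helper, not a Databricks API:

    ```python
    import datetime
    import time

    def timed(fn, *args, **kwargs):
        """Run a command and report wall-clock start/finish timestamps,
        not just the elapsed milliseconds the notebook UI shows today."""
        started = datetime.datetime.now(datetime.timezone.utc)
        result = fn(*args, **kwargs)
        finished = datetime.datetime.now(datetime.timezone.utc)
        print(f"started:  {started.isoformat()}")
        print(f"finished: {finished.isoformat()}")
        print(f"runtime:  {(finished - started).total_seconds() * 1000:.0f} ms")
        return result

    timed(time.sleep, 0.1)  # prints the start/finish timestamps around the call
    ```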

    2 votes  ·  0 comments  ·  Strong Feedback
  12. Support for enqueuing job clusters onto an instance pool

    Using instance pools to optimize the runtime of smaller, automated jobs is currently at odds with the built-in scheduling system.

    This is because an instance pool will simply reject a job for which it can't immediately procure the required amount of nodes.

    This is a proposal for an enqueuing behavior such that an automated job would instead wait (possibly with a configurable upper time limit) for resources to become available. The requirement would then be that either the minimum autoscaling configuration or the fixed number of nodes is satisfied at the time of job start.

    Having this functionality…
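    The proposed behavior can be sketched as a polling loop; `free_nodes_fn` is a hypothetical stand-in for querying the instance pool's idle instance count, not a real API:

    ```python
    import time

    def enqueue_until_capacity(requested_nodes, free_nodes_fn, timeout_s=600, poll_s=10):
        """Instead of rejecting a job when the pool cannot immediately supply
        `requested_nodes`, poll until either the pool has capacity or a
        configurable upper time limit is reached."""
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if free_nodes_fn() >= requested_nodes:
                return True   # capacity available: start the job
            time.sleep(poll_s)
        return False          # timed out: fail the run, as happens today

    # Simulate a pool that frees enough nodes on the third poll.
    polls = iter([0, 1, 4])
    print(enqueue_until_capacity(3, lambda: next(polls), timeout_s=5, poll_s=0.01))  # True
    ```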

    2 votes  ·  0 comments  ·  Strong Feedback
  13. Show ephemeral job on portal

    Currently the UI only shows jobs that were created from the portal or the REST API /create endpoint. We have a scenario where we create jobs on the fly using /api/runsubmit, and they don't show up in the Jobs section of the portal. We can only see the details of a specific run by going directly to its endpoint; there is no centralized view.

    Is this something coming up, or at least on the roadmap?
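    For reference, a one-time run of this kind is submitted with a payload like the sketch below. The field names follow the Jobs API 2.0 `runs/submit` endpoint (which the path above appears to refer to); the cluster values are illustrative, not recommendations:

    ```python
    import json

    def one_time_run_payload(run_name, notebook_path):
        """Body for POST /api/2.0/jobs/runs/submit, which creates the kind of
        ephemeral run described above: it never appears under the Jobs list
        in the UI, only on its own run page."""
        return {
            "run_name": run_name,
            "new_cluster": {
                "spark_version": "7.3.x-scala2.12",  # example runtime; pick your own
                "node_type_id": "Standard_DS3_v2",   # example VM size
                "num_workers": 2,
            },
            "notebook_task": {"notebook_path": notebook_path},
        }

    print(json.dumps(one_time_run_payload("nightly-adhoc", "/Users/me/etl"), indent=2))
    ```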

    1 vote  ·  0 comments  ·  Strong Feedback
  14. Multiple Cron schedules for one job

    Many simple schedules cannot be configured for a single job in Databricks due to the limitations of cron schedules. E.g. running a job every 40 minutes. Multiple schedules could provide such a frequency, but today, that would require having duplicate copies of the same job with different schedules.
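    As a worked example of why multiple schedules would help: a 40-minute cadence repeats every two hours, so two Quartz-style cron expressions together cover it, e.g. `0 0,40 0-22/2 * * ?` (minutes 0 and 40 of every even hour) and `0 20 1-23/2 * * ?` (minute 20 of every odd hour). The snippet below enumerates both schedules' fire times over a day and verifies they are evenly spaced:

    ```python
    # Minutes-of-day fired by the two schedules combined.
    fires = sorted(
        [h * 60 + m for h in range(0, 24, 2) for m in (0, 40)] +  # even hours: :00, :40
        [h * 60 + 20 for h in range(1, 24, 2)]                    # odd hours: :20
    )
    # The gap between every pair of consecutive fire times.
    gaps = {b - a for a, b in zip(fires, fires[1:])}
    print(fires[:4], gaps)  # [0, 40, 80, 120] {40}
    ```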

    1 vote  ·  0 comments  ·  Strong Feedback
  15. Enable Azure AD credential passthrough to ADLS Gen2 from PowerBI

    At present, the Power BI connector uses token authentication. It would be ideal if it used AD auth, and that auth were passed down to the underlying source (Data Lake Gen2).

    This is currently only available within the workspace using High Concurrency clusters, but we would like non-technical users to use Power BI.

    1 vote  ·  0 comments  ·  Strong Feedback
  16. runtime versions

    Databricks needs to better test the compatibility between different runtime versions and the various packages.

    1 vote  ·  0 comments  ·  Strong Feedback
  17. Disabling 'import library' options

    I want to disable all possible ways of installing libraries, covering init scripts, the UI, the REST API, and conda.

    1 vote  ·  0 comments  ·  Strong Feedback
  18. Personal Access Token Security Improvements

    Most Databricks users end up needing to generate a Personal Access Token - which I am guessing is why Microsoft started to default that setting to ON.

    The problem is, from an Access Control perspective these tokens present a massive risk to any organization because there are no controls around them.

    These tokens allow direct access to everything the user has access to and all it takes to cause a major data breach is for one user to accidentally post one of these tokens on a public forum or GitHub.

    Here are a few specific issues:
    1. Even though conditional…

    1 vote  ·  0 comments  ·  Strong Feedback
