Azure Databricks

Azure Databricks is an Apache Spark-based analytics platform optimized for the Microsoft Azure cloud services platform. Designed with the founders of Apache Spark, Databricks is integrated with Azure to provide one-click setup, streamlined workflows, and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts.

We would love to hear any feedback you have for Azure Databricks.
For more details about Azure Databricks, see our documentation page.

  1. Databricks service principal per workspace for specific Key Vault access

    Databricks currently accesses Key Vault from the control plane and uses the same AzureDatabricks service principal for ALL Databricks workspaces in the tenant.

    At present, if you create a secret scope in workspace A on Key Vault A and a new secret scope in workspace B on Key Vault B, then the AzureDatabricks service principal has access to both Key Vaults. Therefore, provided you are privileged enough to know the details (resource URI) of the Key Vaults, you can create a scope from your own Databricks workspace C and gain access to all the keys!

    It should be possible to specify an…
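
    To make the concern concrete, here is a minimal sketch of creating a Key Vault-backed scope through the Secrets API (not an official procedure; the workspace URL, token, and Key Vault identifiers are placeholders). Note that the request needs nothing beyond the vault's resource URI and DNS name:

      # Hypothetical values throughout; only the resource URI and DNS name
      # of the target vault are required to create the scope.
      import requests

      WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
      AAD_TOKEN = "<aad-token-of-signed-in-user>"  # placeholder

      resp = requests.post(
          f"{WORKSPACE_URL}/api/2.0/secrets/scopes/create",
          headers={"Authorization": f"Bearer {AAD_TOKEN}"},
          json={
              "scope": "kv-a-scope",
              "scope_backend_type": "AZURE_KEYVAULT",
              "backend_azure_keyvault": {
                  "resource_id": "/subscriptions/<sub>/resourceGroups/<rg>"
                                 "/providers/Microsoft.KeyVault/vaults/keyvault-a",
                  "dns_name": "https://keyvault-a.vault.azure.net/",
              },
          },
      )
      resp.raise_for_status()

    Because the scope is served by the shared AzureDatabricks service principal, any workspace in the tenant that runs an equivalent request gains the same access, which is exactly the problem described above.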

    21 votes · 0 comments · Strong Feedback
  2. Show the rowcount for rows returned in Notebooks

    The notebook shows the data returned but not the number of rows, so I have to rerun the command with a SELECT COUNT(*) on it.

    This should be easy to implement
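
    As a stopgap, the count can be produced in the same cell, at the cost of a second pass over the data. A minimal notebook sketch (my_table is a placeholder; display() and spark are provided by the Databricks notebook runtime):

      df = spark.sql("SELECT * FROM my_table")   # placeholder query
      display(df)                                # shows the data, but no row count
      print(f"Row count: {df.count()}")          # the extra action this idea would remove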

    6 votes · 0 comments
  3. Support Single-Sign On with custom identity providers

    Databricks on AWS already supports multiple identity providers for SSO; see https://docs.databricks.com/administration-guide/users-groups/single-sign-on/index.html.

    There is no reason why Azure Databricks should be limited to AAD for SSO.

    5 votes · 0 comments
  4. Enable Azure AD credential passthrough to ADLS Gen2

    Add the ability to pass the AAD credential of the user working with an Azure Databricks cluster through to Azure Data Lake Storage Gen2 filesystems, to build secure enterprise data lake analytics on top of ADLS Gen2 with Databricks. This feature should not be limited to High Concurrency clusters, since those clusters do not support many features (including Scala), and a typical enterprise advanced-analytics scenario is to run a dedicated cluster for a small group of departmental analysts (Standard clusters are the most popular).
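
    For context, this is the access pattern the feature enables, assuming a cluster with the Spark configuration spark.databricks.passthrough.enabled set to true (the storage account, container, and path below are placeholders):

      # Read from ADLS Gen2 using the notebook user's own AAD identity;
      # no service principal or account key is configured on the cluster.
      df = spark.read.parquet(
          "abfss://data@mydatalake.dfs.core.windows.net/curated/sales"
      )
      df.show()

    The ask is for this to work on Standard clusters as well, not only High Concurrency ones.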

    82 votes · 4 comments
  5. Azure Databricks should have more granular level access permissions

    Currently, Azure Databricks Workspace provides only four options for access permissions:

    1. Workspace Access Control
    2. Cluster and Jobs Access Control
    3. Table Access Control
    4. Personal Access Tokens

    These permissions grant users more access than they require.

    Would it be possible to create more granular permissions under Access Control? Specifically for the requirements below:

    • Access to view data sources
    • Access to view Databricks runs to check failures and their reasons
    • Access to view data changes and deployment issues
    • Access to troubleshoot data processing failures caused by data issues or system errors in the Databricks workspace

    20 votes · 0 comments · Strong Feedback
  6. Support changing the VNet Address space or subnet CIDR of an existing Azure Databricks workspace

    Modification of the CIDR is quite common, especially when a proof of concept (POC) is a success and you want to go further and connect it to a corporate network.

    RFC 1918 addresses are a real challenge to maintain, and when you run a POC you cannot quickly obtain a /16 or /24, as required by the Databricks VNet injection feature.

    For reference, I had missed the page below, which says this is not supported; the impact I saw was that Spark commands no longer worked (dbutils commands still did).
    https://docs.microsoft.com/en-us/azure/databricks/kb/cloud/azure-vnet-jobs-not-progressing

    4 votes · 0 comments · Strong Feedback
  7. Azure Databricks should be FedRAMP compliant across all US regions

    Azure Databricks seems to have every compliance certification other than FedRAMP. I would think it would not be a problem to become FedRAMP compliant, which would allow government data to be transformed in the platform.

    4 votes · 0 comments
  8. Implement access token auto refresh when using credential passthrough

    When a cluster is configured with credential passthrough, we get an access denied error after one hour of running a notebook due to AAD access token expiration. It would therefore be nice to have an access token auto-refresh feature, with no need for an Azure Active Directory admin to increase the AccessTokenLifetime for users.

    This feature is also cited in a comment here: https://feedback.azure.com/forums/909463-azure-databricks/suggestions/36879865-enable-azure-ad-credential-passthrough-to-adls-gen

    4 votes · 0 comments · Strong Feedback
  9. Workspace, cluster, job ownership changes

    It is an unfortunate situation that there is no way to change the ownership of:
    * workspace
    * cluster
    * job
    People leave companies, and all of the above remain owned by them. I understand that some of the issues around individuals owning resources can be avoided with a service principal (SP), but that only helps new deployments and can still be inconvenient in some use cases. There should be an option to change ownership. The lack of this feature can also cause operational issues for clusters and jobs:
    * a cluster whose owner is removed from the workspace will not be…

    2 votes · 0 comments
  10. Cluster status API - check if cluster is up and running

    Currently there is no way to use the API to verify whether a given cluster is running. This causes a lot of trouble for tools like Power BI, which need to verify upfront whether queries can run. This information is available in the GUI, so we would gladly welcome the same functionality in the REST API.
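
    For reference, the Clusters REST API does expose a state field through GET /api/2.0/clusters/get, which may cover part of this scenario when a script can sit in front of the reporting tool. A minimal sketch (workspace URL, token, and cluster ID are placeholders):

      import requests

      WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
      TOKEN = "<personal-access-token>"  # placeholder

      resp = requests.get(
          f"{WORKSPACE_URL}/api/2.0/clusters/get",
          headers={"Authorization": f"Bearer {TOKEN}"},
          params={"cluster_id": "<cluster-id>"},  # placeholder
      )
      resp.raise_for_status()
      state = resp.json()["state"]  # e.g. PENDING, RUNNING, TERMINATED
      print("cluster is up" if state == "RUNNING" else f"not running: {state}")

    Tools like Power BI cannot easily make such a call themselves, which is what the suggestion addresses.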

    2 votes · 0 comments
  11. Add transport of all Cluster events to Log Analytics workspace

    Currently, the diagnostic settings for a workspace allow only a limited number of events to be transported into the Log Analytics DatabricksClusters category. Missing events include:

    • cluster termination
    • cluster auto-sizing events (adding machines, expanding disks, and the reverse)

    In general, it would be more than welcome to have all of the information in the cluster event log made available in Log Analytics as well.

    2 votes · 0 comments · Strong Feedback
  12. Azure Diagnostics logs are collected with up to a 24-hour delay, so alerts cannot be used

    As the documentation says: "On any given day, Azure Databricks delivers at least 99% of diagnostic logs within the first 24 hours, and the remaining 1% in no more than 72 hours."
    See: https://docs.microsoft.com/en-us/azure/databricks/administration-guide/account-settings/azure-diagnostic-logs#diagnostic-log-delivery

    In this case, if logs are sent to Log Analytics, log search alerts cannot be used to monitor them because of the unpredictable delay. Multiple customers have reported this; we hope it can be improved.

    4 votes · 0 comments · Strong Feedback
  13. (Idea text missing from this page capture)

    2 votes · 0 comments · Strong Feedback
  14. Expose API key during ARM deployment

    We have a CI/CD pipeline set up for our analytics platform deployment (Data Lake, ADF, Databricks, ...). Some of the settings are written directly to a Key Vault so they can be referenced later by e.g. ADF linked services.
    However, for Azure Databricks there is always a manual step: a user needs to log in and create an API key in the UI.
    It would be great if the ARM template could return a temporary Databricks API key (e.g. valid only for 24h), which would allow us to automate everything (content deployment, cluster creation, ...) via the Databricks API.
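
    A sketch of the automation this would unlock, assuming the alternative of authenticating a service principal against the well-known AzureDatabricks resource is acceptable; all tenant, client, and workspace values are placeholders, and the service principal must already be a user of the workspace:

      import requests

      TENANT = "<tenant-id>"  # placeholder
      token_resp = requests.post(
          f"https://login.microsoftonline.com/{TENANT}/oauth2/token",
          data={
              "grant_type": "client_credentials",
              "client_id": "<sp-client-id>",          # placeholder
              "client_secret": "<sp-client-secret>",  # placeholder
              # Well-known application ID of the AzureDatabricks resource:
              "resource": "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d",
          },
      )
      token_resp.raise_for_status()
      aad_token = token_resp.json()["access_token"]

      # Example follow-up call with no interactive UI step: list clusters.
      clusters = requests.get(
          "https://adb-1234567890123456.7.azuredatabricks.net/api/2.0/clusters/list",
          headers={"Authorization": f"Bearer {aad_token}"},
      )
      clusters.raise_for_status()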

    169 votes · 16 comments
  15. High availability for driver nodes

    Currently, Azure Databricks clusters contain a single driver node, which creates a single point of failure should a process fail. This can cause clusters to become unresponsive during jobs, affecting streaming jobs greatly.

    I propose that a second driver node be made available (when desired) to support automatic failover (HA) should a driver become unresponsive.

    8 votes · 1 comment
  16. Please add the ability to use Table Access Control with R

    At present, Table Access Control can be used only with Python and SQL, so please add the ability to use Table Access Control with R as well.

    5 votes · 0 comments
  17. Let a notebook attach and run on a terminated cluster while another cluster is running

    I connect my notebook to a cluster that is currently running and run some commands. Then I realize that I connected to the wrong cluster, so I attach the notebook to another cluster that is currently shut down and run the command again. I expect the shut-down cluster to auto-start and run my command, but instead it tells me that another cluster is currently running and the newly attached cluster is shut down. It asks whether it should attach to the active cluster and run the command there. The two options for this question are "Cancel"…

    1 vote · 0 comments
  18. Multiple Cron schedules for one job

    Many simple schedules cannot be configured for a single job in Databricks due to the limitations of cron schedules, e.g. running a job every 40 minutes. Multiple schedules could provide such a frequency, but today that would require duplicate copies of the same job with different schedules (see the worked example below).
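
    As a worked example, "every 40 minutes" repeats on a two-hour cycle (0:00, 0:40, 1:20, 2:00, ...), so no single cron expression can express it, but three together can. In Quartz syntax (seconds, minutes, hours, day-of-month, month, day-of-week), as used by Databricks job schedules:

      # Three Quartz cron expressions that jointly fire every 40 minutes.
      SCHEDULES = [
          "0 0 0/2 * * ?",   # :00 of every even hour
          "0 40 0/2 * * ?",  # :40 of every even hour
          "0 20 1/2 * * ?",  # :20 of every odd hour
      ]
      # Today each expression needs its own duplicate job; the suggestion is
      # to allow attaching all three to a single job.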

    1 vote · 0 comments · Strong Feedback
  19. Moving multiple cells up/down together

    In Jupyter notebooks you can select multiple cells and easily move them up or down together. This is not currently possible in Databricks.

    1 vote · 0 comments
  20. Enable Azure AD credential passthrough to ADLS Gen2 from PowerBI

    At present, the Power BI connector uses token authentication. It would be ideal if it used AAD auth and passed that auth down to the underlying source (Data Lake Gen2).

    This is currently only available within the workspace using High Concurrency clusters, but we would like non-technical users to use Power BI.

    1 vote · 0 comments · Strong Feedback