Update: Microsoft is moving away from UserVoice sites on a product-by-product basis throughout the 2021 calendar year and will use first-party solutions for customer feedback.

Azure Databricks

Azure Databricks is an Apache Spark-based analytics platform optimized for the Microsoft Azure cloud services platform. Designed with the founders of Apache Spark, Databricks is integrated with Azure to provide one-click setup, streamlined workflows, and an interactive workspace that enables collaboration between data scientists, data engineers, and business analysts.

We would love to hear any feedback you have for Azure Databricks.
For more details about Azure Databricks, see our documentation page.

  1. Audit and Log Notebook Commands

    Due to compliance requirements, we need to log and audit which commands are executed by which user.

    Example. A user sets up a SQL notebook and runs the following command in a cell:

    select * from purchases where vendorid='abc'

    We need to log, and be able to audit, that user X ran the above query at time T. One way this might be queried today is sketched below.
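
    A minimal sketch, assuming diagnostic logs are routed to a Log Analytics workspace and that notebook command events (runCommand) are captured there; the table and column names (DatabricksNotebook, ActionName, Identity) are assumptions based on the standard Databricks diagnostic categories:

        # Hypothetical sketch: look up which user ran which notebook command.
        from datetime import timedelta
        from azure.identity import DefaultAzureCredential
        from azure.monitor.query import LogsQueryClient

        client = LogsQueryClient(DefaultAzureCredential())

        # KQL against the (assumed) DatabricksNotebook diagnostic table.
        query = """
        DatabricksNotebook
        | where ActionName == "runCommand"
        | project TimeGenerated, Identity, RequestParams
        | order by TimeGenerated desc
        """

        response = client.query_workspace(
            workspace_id="<log-analytics-workspace-id>",  # placeholder
            query=query,
            timespan=timedelta(days=1),
        )
        for table in response.tables:
            for row in table.rows:
                print(row)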

    27 votes  ·  3 comments  ·  Strong Feedback
  2. Support Databricks Git Integration with Azure DevOps linked to a different AD

    According to the official docs, in order to enable Git in Databricks:

    The Azure DevOps Services organization must be linked to the same Azure AD tenant as Databricks.

    This is extremely limiting, because Databricks workspaces are often deployed under a client's AD tenant. A possible PAT-based workaround is sketched below.
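
    As a stopgap, it may be possible to authenticate to Azure DevOps with a personal access token instead of Azure AD passthrough; a minimal sketch against the Databricks Git credentials endpoint (availability may vary by workspace; host and tokens are placeholders):

        # Hypothetical sketch: register an Azure DevOps PAT as the Git
        # credential, sidestepping the same-tenant Azure AD requirement.
        import requests

        DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
        DATABRICKS_TOKEN = "<databricks-pat>"  # placeholder

        resp = requests.post(
            f"{DATABRICKS_HOST}/api/2.0/git-credentials",
            headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
            json={
                "git_provider": "azureDevOpsServices",
                "git_username": "user@example.com",             # placeholder
                "personal_access_token": "<azure-devops-pat>",  # placeholder
            },
        )
        resp.raise_for_status()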

    5 votes  ·  0 comments  ·  Strong Feedback
  3. Instance pools should have a hybrid mode between on-demand and spot

    Currently, instance pools can be set to either all on-demand or all spot instances.

    I would like my job and interactive clusters to be able to use on-demand instances for the driver node and a set number of workers, with the remaining workers running as spot instances.

    When drawing from a pool, I cannot currently set my driver node to be on-demand and my worker nodes to be spot.

    With a regular interactive cluster I can (must?) set my driver node to on-demand and my worker nodes to spot, as in the sketch below.
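
    For reference, a minimal sketch of that non-pool configuration via the Clusters API azure_attributes block; with first_on_demand set to 3, the driver plus the first two workers stay on-demand and the rest are spot (host, token, and sizes are placeholders):

        # Hypothetical sketch: hybrid on-demand/spot cluster without a pool.
        import requests

        DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
        DATABRICKS_TOKEN = "<databricks-pat>"  # placeholder

        cluster_spec = {
            "cluster_name": "hybrid-spot-demo",
            "spark_version": "7.3.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 8,
            "azure_attributes": {
                "first_on_demand": 3,  # driver + first 2 workers on-demand
                "availability": "SPOT_WITH_FALLBACK_AZURE",
                "spot_bid_max_price": -1,  # bid up to the on-demand price
            },
        }

        resp = requests.post(
            f"{DATABRICKS_HOST}/api/2.0/clusters/create",
            headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
            json=cluster_spec,
        )
        resp.raise_for_status()
        print(resp.json()["cluster_id"])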

    4 votes  ·  0 comments  ·  Strong Feedback
  4. Enable ADLS passthrough and table access control together.

    Currently, on a high-concurrency cluster with ADLS passthrough enabled, table access control cannot be enabled at the same time. If databases/tables are created on top of storage containers, and the files in those containers have ACLs/RBAC applied, this works fine: users cannot access the underlying data when the RBAC/ACLs on the data files do not allow it. If a user runs a select query on a table whose underlying data is in a container the user does not have access to, the query returns an error, as expected.

    Users can still view the tables and drop tables and databases not created…

    3 votes  ·  0 comments  ·  Strong Feedback
  5. JDK 11 support on the Databricks platform

    We are migrating all our Java applications to run on JVM 11-based platforms, so Databricks should support JDK 11.

    The latest Databricks runtime, 7.2, still supports only Java 1.8 (see https://docs.databricks.com/release-notes/runtime/7.2.html#system-environment). We have tried installing JDK 11 through init scripts while spawning the cluster, but the cluster fails to start afterwards.

    Other platforms, such as AWS EMR, have already started supporting JDK 11. Please provide plans and an ETA for this request.

    15 votes  ·  1 comment  ·  Strong Feedback
  6. Propagate tags to custom-named storage accounts

    On storage accounts created by Databricks deployments, tag propagation works only if the storage account name starts with 'dbstorage'. Tags are not propagated to a storage account with a custom name that does not start with 'dbstorage'. Larger environments are likely to have naming conventions, and those conventions for storage accounts probably do not start with 'dbstorage'.

    Propagation should work for a storage account with any name.

    4 votes  ·  0 comments  ·  Strong Feedback
  7. Increase the secret scope limit per workspace

    The current limit of 100 secret scopes per workspace is very low if an enterprise wants to use a single workspace for multiple application teams and isolate each team's application secrets, e.g. one scope per team, as sketched below.
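
    A minimal sketch of that per-team pattern via the Secrets API, which runs into the limit quickly (host, token, and team names are placeholders):

        # Hypothetical sketch: one secret scope per application team.
        import requests

        DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
        DATABRICKS_TOKEN = "<databricks-pat>"  # placeholder
        HEADERS = {"Authorization": f"Bearer {DATABRICKS_TOKEN}"}

        for team in ["payments", "fraud", "reporting"]:  # placeholder teams
            resp = requests.post(
                f"{DATABRICKS_HOST}/api/2.0/secrets/scopes/create",
                headers=HEADERS,
                json={"scope": f"app-{team}"},
            )
            resp.raise_for_status()

            # Grant the team's group MANAGE on its own scope (Premium tier).
            resp = requests.post(
                f"{DATABRICKS_HOST}/api/2.0/secrets/acls/put",
                headers=HEADERS,
                json={"scope": f"app-{team}",
                      "principal": f"{team}-group",  # placeholder group name
                      "permission": "MANAGE"},
            )
            resp.raise_for_status()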

    5 votes  ·  0 comments  ·  Strong Feedback
  8. Allow the owner of a database to manage tables (alter/delete, etc.) that are not owned by them

    I would like our Databricks environment to be as self-service as possible. Now that some users are transitioning out, it is clear that only an administrator can drop a table, or change its owner, when the owning user is no longer here. I would like to be able to grant this privilege to the owner of the database; what an admin must run today is sketched below. If I did not want the database owner to be able to delete/alter tables, I would have assigned ownership to one of our administrators.
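
    For context, a minimal sketch of the admin-only operations in question, run from a notebook on a table-ACL-enabled cluster (names are placeholders; spark is the notebook's predefined SparkSession):

        # Hypothetical sketch: reassign or drop a departed user's table.
        # Today only an admin may run these; the request is to let the
        # database owner do the same.
        spark.sql("ALTER TABLE sales_db.orders OWNER TO `new.owner@example.com`")
        spark.sql("DROP TABLE IF EXISTS sales_db.stale_table")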

    3 votes  ·  1 comment  ·  Strong Feedback
  9. Query Acceleration should support compressed data

    Query Acceleration currently supports CSV and JSON files, but not compressed versions of those files. It needs to support compressed data, or it won't be usable for big data stores. A sketch of the current behavior follows.
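
    For reference, a minimal sketch of the feature via the azure-storage-blob SDK; this works against a plain CSV blob but not against a gzipped one (connection string and names are placeholders):

        # Hypothetical sketch: Query Acceleration ("quick query") on a CSV
        # blob. A compressed blob (e.g. .csv.gz) at the same path is rejected.
        from azure.storage.blob import BlobClient, DelimitedTextDialect

        blob = BlobClient.from_connection_string(
            conn_str="<connection-string>",  # placeholder
            container_name="raw",            # placeholder
            blob_name="purchases.csv",       # plain CSV only
        )

        reader = blob.query_blob(
            "SELECT * FROM BlobStorage WHERE vendorid = 'abc'",
            blob_format=DelimitedTextDialect(delimiter=",", has_header=True),
        )
        print(reader.readall().decode("utf-8"))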

    1 vote  ·  0 comments  ·  Strong Feedback
  10. Notebook isolation for R packages

    A separate environment per notebook, as is done for Python, would help: the notebook would load all the libraries installed on the cluster by default, but any libraries a user installs or updates would stay within that notebook's environment. Unfortunately, this is not the case for R notebooks. Installing a package can introduce breaking changes: say your code is not compatible with version 2.0 of library A, and a colleague forces the installation of that version from his notebook; your notebook then no longer runs, because he installed version 2.0, not…

    1 vote  ·  0 comments  ·  Strong Feedback
  11. Track the user who changes a folder name

    It seems that Databricks cannot track which user changed a folder's name.

    We need a feature in the diagnostic/activity logs to capture this information.

    1 vote  ·  0 comments  ·  Strong Feedback
  12. Enable RStudio to use Credential Passthrough for ADLS

    RStudio on Azure Databricks is not currently compatible with clusters that have credential passthrough enabled; instead, authentication must be managed using either access keys or a service principal.

    Enabling credential passthrough would allow users to use the gold-standard in ADLS authentication.

    1 vote  ·  0 comments  ·  Strong Feedback
  13. Provide a table schema download option

    In Data -> Tables:

    Please provide a table schema download option, so that columns and their data types can be exported to XLS or CSV. Until then, a notebook workaround is sketched below.
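
    A minimal sketch of such a workaround (table name and output path are placeholders; spark is the notebook's predefined SparkSession):

        # Hypothetical sketch: dump a table's columns and data types to CSV.
        import csv

        fields = spark.table("sales_db.purchases").schema.fields  # placeholder table

        with open("/dbfs/tmp/purchases_schema.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["column", "data_type"])
            for field in fields:
                writer.writerow([field.name, field.dataType.simpleString()])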

    1 vote  ·  0 comments  ·  Strong Feedback
  14. Azure Databricks logging: add the notebook ID to every command in stderr

    Add the notebook ID and a timestamp to the stderr log. The command ID is already inserted into the logs.

    2 votes  ·  0 comments  ·  Strong Feedback
  15. Provide an API to report how long an active cluster has been running

    We need an API that reports how long an active cluster has been running; a rough approximation from today's API is sketched below.
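
    A minimal sketch using the Clusters API; note that start_time is the cluster's creation time in epoch milliseconds, which may not reflect the latest restart, hence the request for a proper uptime API (host, token, and cluster ID are placeholders):

        # Hypothetical sketch: approximate cluster uptime from clusters/get.
        import time
        import requests

        DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
        DATABRICKS_TOKEN = "<databricks-pat>"  # placeholder

        resp = requests.get(
            f"{DATABRICKS_HOST}/api/2.0/clusters/get",
            headers={"Authorization": f"Bearer {DATABRICKS_TOKEN}"},
            params={"cluster_id": "<cluster-id>"},  # placeholder
        )
        resp.raise_for_status()
        info = resp.json()

        if info.get("state") == "RUNNING":
            uptime_h = (time.time() * 1000 - info["start_time"]) / 3_600_000
            print(f"approx. uptime: {uptime_h:.1f} h")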

    1 vote  ·  0 comments  ·  Strong Feedback
  16. Tooltip is misleading

    While creating a Python job in Databricks, the information box next to Type, and the corresponding tooltip, say to choose either dbfs:/ or S3:/. Although this is understandable, at first glance it is also misleading.

    1 vote  ·  0 comments  ·  Strong Feedback
  17. Run History Page Error Log

    Display the type of error that failed the notebook on the Run History results page.

    4 votes  ·  1 comment  ·  Strong Feedback
  18. Don't erase prints when returning an output message with dbutils.notebook.exit(msg)

    If a job prints something, but at the end calls dbutils.notebook.exit("Hello World") (in Python) to return a value to whoever ran the job, the printed output is erased. Please keep the prints even when exit is used. This seems like a bug to me; I want to be able to read the logs regardless. A minimal reproduction is sketched below.
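
    A reproduction of the behavior being described, run as a job (dbutils is the notebook's predefined utility object):

        # Hypothetical repro: when run as a job, the print output below is
        # not visible in the job logs once exit() returns a value.
        print("important diagnostic line")    # expected to survive in logs
        dbutils.notebook.exit("Hello World")  # returns a value; prints vanish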

    2 votes  ·  0 comments  ·  Strong Feedback