Storage

  1. Make a JS (browser & TypeScript for Angular) SDK available for either Azure Storage or Azure Cosmos DB

    There is no proper browser JS SDK available for Azure Storage Tables.

    While other SDKs have had their Azure Storage Tables API moved to the Azure Cosmos DB SDK, the current browser JS SDK effectively supports neither: it has been phased out of the Azure Storage SDK, yet the Azure Cosmos DB SDK only supports the SQL API (as seen here: https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-cosmos/3.9.1/index.html).

    Please make the browser JS SDK for Azure Storage Tables available.

    Preferably as part of the Azure Storage SDK or through the Azure CosmosDB SDK.

    1 vote  ·  0 comments  ·  Tables
  2. Take an Azure Storage Table backup without overwriting Timestamp values

    While taking an Azure Storage Table backup, the Timestamp values get overwritten with the current date.

    We'd like the backup table to be an exact copy of the original table, so that we can point our application at the backup table with no side effects.
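Since the service-managed Timestamp cannot be set by clients, a common workaround (a sketch only; `backup_entity` and the `OriginalTimestamp` property are hypothetical names, not part of any SDK) is to preserve the original value in a user-defined property while copying:

```python
from datetime import datetime, timezone

def backup_entity(entity: dict) -> dict:
    # The service always stamps its own Timestamp on insert, so carry
    # the original value in a user-defined property we control.
    copy = dict(entity)
    copy["OriginalTimestamp"] = entity["Timestamp"]
    return copy

src = {"PartitionKey": "p1", "RowKey": "r1", "Value": 42,
       "Timestamp": datetime(2020, 1, 1, tzinfo=timezone.utc)}
dst = backup_entity(src)
```

This doesn't make the backup byte-identical, which is why native support (as the idea requests) would still be preferable.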

    4 votes  ·  1 comment  ·  Tables
  3. Request for a function to automatically organize the diagnostic logs of virtual machines within a certain period

    Diagnostic logs for a virtual machine are created permanently in Azure Table Storage. To avoid unnecessary growth in data size and the like, we would like a function that periodically and automatically deletes data older than a certain period.
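Until such a feature exists, a cleanup job typically has to select the old entities itself. A minimal sketch (the `retention_filter` helper is illustrative, not an SDK function) of building the OData filter such a job could pass to a query-then-delete loop:

```python
from datetime import datetime, timedelta, timezone

def retention_filter(days, now=None):
    """Build an OData $filter matching entities older than `days` days;
    a scheduled cleanup job could delete whatever it returns."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    # The Table service expects datetime'<ISO-8601>' literals in $filter.
    return "Timestamp lt datetime'%s'" % cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")

f = retention_filter(30, now=datetime(2020, 1, 31, tzinfo=timezone.utc))
```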

    3 votes  ·  0 comments  ·  Tables
  4. I want to use Azure Table at "Performance level: Premium"

    Currently, Azure Table is only available at "Performance level: Standard". At this level there are delays in searching and other operations, so we would like to use it at "Performance level: Premium"; please implement this as soon as possible.

    1 vote  ·  0 comments  ·  Tables
  5. Allow Azure AD authentication from an application to manage Azure Table Storage

    Currently only Queues and Blobs support this type of login. Authentication for our automations can't be aligned, since Tables do not support it. See:
    https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-app

    20 votes  ·  1 comment  ·  Tables
  6. Editing Filters in SSMS Object Explorer doesn't Refresh the Filtered Results

    As an example:


    1. In SSMS Object Explorer, right-click on Tables and select Filter/Filter Settings

    2. Add some filter settings and click OK

    If there were no pre-existing filter settings, the filter is applied and the Object Explorer is refreshed. However, if there were existing settings, clicking OK does nothing and the user has to manually refresh the Object Explorer.

    1 vote  ·  0 comments  ·  Tables
  7. Memory-Optimized off-row storage issue

    I have a problem because the data row size of some tables is greater than 8,060 bytes, so there was some off-row storage visible in the internal XTP DMVs, and those tables allocated about 120 GB of memory.
    I altered those tables to reduce the row size to less than 8,060 bytes; the memory allocated to them shrank, but the memory allocated to objectid=0 keeps growing in sys.dm_db_xtp_table_memory_stats. I think the garbage collector thread can't find this orphaned object, so it can't deallocate the memory.
    Another issue is that 'Memory Allocated To Memory Optimized Objects' is 25 GB where 'Memory Used By Memory…

    122 votes  ·  2 comments  ·  Tables
  8. Table storage on persistent memory devices.

    This could enable simple data modeling even for relational data.

    1 vote  ·  0 comments  ·  Tables
  9. Localtime logging support in table storage that is used from Azure Functions

    Azure Functions automatically creates a storage account for its service and stores logging data in Table Storage in the form of "AzureWebJobsHostLogsYYYYMM" tables.

    I know that UTC is widely adopted in Azure, and Edm.DateTime, which is used for 'Timestamp', 'EndTime', and 'StartTime', only stores UTC datetime data.

    But I'd appreciate it if you added a feature that allows us to check these recorded logs of Azure Functions in a specified local time such as JST or PST.

    Thanks,
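In the meantime, the stored UTC timestamps can be converted on the client side for display. A small sketch (the `to_local` helper is illustrative; JST is fixed at UTC+9):

```python
from datetime import datetime, timezone, timedelta

JST = timezone(timedelta(hours=9))  # UTC+9, as requested for JST

def to_local(ts_utc, tz=JST):
    """Convert the UTC Timestamp recorded by the Table service
    into a chosen local time zone for display."""
    return ts_utc.astimezone(tz)

logged = datetime(2021, 3, 1, 15, 0, tzinfo=timezone.utc)
local = to_local(logged)  # 2021-03-02 00:00 +09:00
```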

    1 vote  ·  0 comments  ·  Tables
  10. Table Storage Java SDK Performance Tuning

    The Table Storage Java SDK does not provide a concurrency option for any of its operations. We are getting a TPS of around 500 for table insert queries, though that is partly because of how we designed our data model.
    Async functionality is available for the Table Storage C# SDK but not for the Java SDK.
    It would be good to have a concurrency option for the Table Storage Java SDK.

    3 votes  ·  1 comment  ·  Tables
  11. Query tables using ODataQuery object

    We can currently query Table Storage using OData via an HTTP call or programmatically via a TableQuery object.

    But we can't query programmatically using an ODataQuery object (Microsoft.Rest.Azure.OData.ODataQuery). Why not?

    This would offer the benefits and ease of OData while also permitting business rules to be implemented.
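For context, composing the $filter string by hand is what callers fall back to today when a typed query object is unavailable. A toy builder (purely illustrative; `odata_filter` is not an SDK function) shows the kind of boilerplate a real ODataQuery integration would remove:

```python
def odata_filter(**conditions):
    """Compose an OData $filter string of equality conditions by hand."""
    def literal(v):
        if isinstance(v, bool):
            return str(v).lower()   # OData booleans are lowercase
        if isinstance(v, str):
            return "'%s'" % v       # strings are single-quoted
        return str(v)
    return " and ".join("%s eq %s" % (k, literal(v))
                        for k, v in conditions.items())

q = odata_filter(PartitionKey="orders", Status="open")
```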

    7 votes  ·  0 comments  ·  Tables
  12. Azure Table Storage Events

    Allow events to be published from Azure Table Storage row create, update, and delete operations. This would be like the blob create and update events available today, except for table storage rows.

    133 votes  ·  2 comments  ·  Tables

    Our long-term goal is to have every service within Azure publish events; however, we have yet to begin work on this one.

    As always, we’re passing the feedback along, but make sure you reach out to Storage as well so they hear your voice directly!

  13. Deleting huge numbers of records from table storage

    Provide an approach to delete a huge number of records at a time, rather than deleting in blocks of 1,000 from a storage table.
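Today the work has to be chunked client-side: an entity group transaction is limited to a single partition key and at most 100 operations. A sketch of planning such a delete (the `delete_batches` helper is hypothetical, not part of any SDK):

```python
from itertools import groupby

def delete_batches(entities, batch_size=100):
    """Plan deletes as per-partition chunks: an entity group transaction
    accepts one partition key and at most 100 operations."""
    batches = []
    key = lambda e: e["PartitionKey"]
    for pk, group in groupby(sorted(entities, key=key), key=key):
        rows = [e["RowKey"] for e in group]
        for i in range(0, len(rows), batch_size):
            batches.append((pk, rows[i:i + batch_size]))
    return batches

# 250 entities spread over two partitions -> four delete batches
plan = delete_batches(
    [{"PartitionKey": "p%d" % (i % 2), "RowKey": str(i)} for i in range(250)])
```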

    15 votes  ·  2 comments  ·  Tables
  14. Bulk upload to table storage

    To have a fast way of uploading a large amount of data to a storage table. Trying to upload 5M entries (4M distinct partition keys) single-threaded takes 20 to 60 hours, depending on where I run it.
    Batch operations don't help much, as they are limited to one partition per batch. Relaxing that restriction to allow multiple partition keys in one batch, and making batches larger, would both help.
    Alternatively, allow uploading a CSV or JSON file as a single operation. It would be acceptable for this to create a new table.
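Short of server-side support, the usual mitigation for the single-threaded bottleneck is client-side parallelism. A minimal sketch (assumptions: `upload_one` stands in for whatever SDK call performs one batch upload; worker count is arbitrary):

```python
from concurrent.futures import ThreadPoolExecutor

def upload_parallel(batches, upload_one, workers=8):
    """Fan per-partition batches out over a thread pool, preserving
    result order; upload_one is called once per batch."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(upload_one, batches))

# With len() standing in for the upload call, we just get batch sizes back.
sizes = upload_parallel([["a"], ["b", "c"], ["d"]], len)
```

This helps throughput but does nothing about the per-batch partition restriction the idea asks to relax.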

    6 votes  ·  0 comments  ·  Tables
  15. It would be great if Azure Table Storage supported Lua scripting to get the data

    Right now there is no way to execute logic on columns other than the row key and partition key. It would be great if we could write some logic using Lua scripts that executes on the server itself and returns the results, because it takes a lot of effort to fetch the data and filter it on the client side.

    1 vote  ·  0 comments  ·  Tables
  16. Allow "Prefer: return-no-content" for upserts (i.e. InsertOrMerge/InsertOrReplace)

    As per the subject, allow ignoring the echo on upsert operations too. There are cases where the echo is not important when performing the upsert itself, even if a row has been modified/replaced (e.g. the table row will be read back later by another component).
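The request is to honor for upserts the same header already used to suppress the insert echo. A sketch of the headers such a request would carry (the `upsert_headers` helper is illustrative only):

```python
def upsert_headers(no_content=True):
    """Request headers for an upsert; 'Prefer: return-no-content' asks
    the service to skip echoing the entity body in the response."""
    headers = {"Accept": "application/json;odata=minimalmetadata"}
    if no_content:
        headers["Prefer"] = "return-no-content"
    return headers
```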

    0 votes  ·  0 comments  ·  Tables
  17. Have a TTL on Table Storage rows, so temporary data gets deleted

    Say you are storing details about logs / orders / page views. You may not care about them after 2 weeks / 1 month / 1 year.

    To save money, it would be great to have a job run daily that deletes this data when it is deemed out of date by the system designer.

    AWS has this in DynamoDB, where you create a column for an expiration date on tables that accumulate data that might be temporary by nature. When you create a row, you populate this column with the future timestamp at which you want it deleted.…
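The DynamoDB-style scheme described above can be sketched in a few lines (assumptions: the `ExpiresAt` property name and epoch-seconds encoding are the submitter's convention, not a Table Storage feature):

```python
from datetime import datetime, timezone

def is_expired(entity, now=None):
    """TTL check: rows carry a user-set ExpiresAt epoch timestamp;
    a daily sweeper job would delete rows past it."""
    now = now or datetime.now(timezone.utc)
    expires = entity.get("ExpiresAt")
    return expires is not None and expires <= now.timestamp()

# ExpiresAt of 1e9 epoch seconds (year 2001) is long past, so this
# row would be swept.
row = {"PartitionKey": "logs", "RowKey": "1", "ExpiresAt": 1_000_000_000}
```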

    192 votes  ·  1 comment  ·  Tables
  18. Preallocate table sizes

    In our load we typically create a new table and then fill it with ~100M entries quickly. After that we create some entries, change some, delete some, but the size is growing more slowly.

    The problem is that when we load the first ~100M elements starting from 0 elements, Azure Storage becomes really slow, with very high response times. We think this is because it has to allocate lots of space and rebalance data between buckets many times at the start. After the initial warmup, we can run a much higher load against it than at the start.

    What we…

    0 votes  ·  0 comments  ·  Tables
  19. Azure table storage should have a way to set the retention policy

    There should be a way to implement an Azure Table Storage retention policy such that anything older than n days can be deleted from table storage.

    119 votes  ·  3 comments  ·  Tables
  20. Metrics for the number of partitions

    Today we have a $MetricsCapacityBlob table, and we have metrics on transactions against tables and blobs: https://msdn.microsoft.com/en-us/library/azure/hh343264.aspx

    We would like to get metrics for the number of partitions an Azure storage table is spread over. This would help us scale the number of concurrent parallel reads we do against the table.

    4 votes  ·  0 comments  ·  Tables