For data warehouse staging tables or session state tables, when I don't want to change my application, I want to just mark the table as unlogged at creation time, similar to PostgreSQL's CREATE UNLOGGED TABLE. This way, my transaction log isn't bloated, the data doesn't get copied around, and my inserts/updates/deletes are faster.
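For reference, the PostgreSQL side of the comparison looks like this (table and column names are illustrative):

```sql
-- PostgreSQL: skip WAL (write-ahead log) writes for this table entirely.
-- The trade-off: the table is automatically truncated after crash recovery.
CREATE UNLOGGED TABLE staging_orders (
    order_id bigint PRIMARY KEY,
    payload  jsonb
);

-- Since PostgreSQL 9.5, a table can also be switched in place:
ALTER TABLE staging_orders SET LOGGED;
ALTER TABLE staging_orders SET UNLOGGED;
```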
Upvotes: 127<=-=Jun 23 2015 8:37AM=-=>
I’m the first to post a useful comment. This must make me special.
Seriously though, this would be an excellent solution to having to create a new “scratchdb” to hold my interim ETL data. This would be a major plus in simplifying design of a high performance app.<=-=Jul 3 2015 5:04AM=-=>
In SQL Server 2014, memory-optimized tables and delayed durability can be used to help mitigate these issues. However, neither of these is always a completely viable solution. Brent's proposed solution is likely the simplest way to achieve this with the least amount of unwanted impact. It is worth noting that other platforms implement similar functionality as well, notably Oracle.<=-=Nov 29 2016 3:58PM=-=>
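For readers who haven't tried them, the two 2014-era workarounds mentioned above look roughly like this sketch (object names are made up; the memory-optimized table additionally requires the database to already have a MEMORY_OPTIMIZED_DATA filegroup):

```sql
-- Option 1: non-durable In-Memory OLTP table. Data changes generate no
-- log records; the schema survives a restart but the data does not.
CREATE TABLE dbo.EtlScratch
(
    RowId   INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(4000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

-- Option 2: delayed durability. Log records are still generated,
-- but COMMIT does not wait for the log flush to disk.
ALTER DATABASE CURRENT SET DELAYED_DURABILITY = ALLOWED;

BEGIN TRANSACTION;
    INSERT INTO dbo.EtlScratch (RowId, Payload) VALUES (1, N'batch 1');
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```

Neither is a full substitute: Option 1 changes the table's programming surface and memory footprint, and Option 2 still writes every row to the log, just asynchronously.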
There are so many good things about this suggestion. I am amazed that SQL Server does not have the capability to turn off logging for certain tables that you define as unlogged. The constructs and restrictions for an unlogged table already exist, as we see in the non-durable In-Memory OLTP offering. It can't require much more to actually implement such useful functionality. Working tables would see another speed increase, and ETL would run faster.<=-=Aug 18 2017 7:25AM=-=>
Unlogged tables could be implemented by truncating them in case of a dirty database startup. That way any kinds of data structure corruptions would be cleared. (Without logging it’s surely very easy to corrupt indexes on power failure.)
This could be done per-index as well. Instead of a truncate (as proposed above) this would disable the index. That would allow for a consistent clustered index or heap with indexes that are considered expendable.<=-=Sep 20 2017 9:20AM=-=>
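The per-index half of this idea already has a rough analogue in today's product: a nonclustered index can be disabled before a large load and rebuilt afterwards (index and table names below are illustrative):

```sql
-- Disable the expendable index so the bulk load doesn't have to maintain it.
ALTER INDEX IX_Staging_Lookup ON dbo.Staging DISABLE;

-- ... perform the large dump-and-reload into dbo.Staging ...

-- Rebuild brings the index back; the rebuild itself is fully logged
-- unless the database is in the SIMPLE or BULK_LOGGED recovery model.
ALTER INDEX IX_Staging_Lookup ON dbo.Staging REBUILD;
```

The difference from the proposal above is that this is manual and per-load, whereas an "expendable index" flag would make the engine do it automatically after a dirty startup.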
Hope this is still being considered. I am a DW architect and this would be really valuable. It would help even with existing investments where we have ETLs that don't scale well, and one area where they don't scale is these large dump-and-reload scenarios. We constantly have to watch log growth. I would think this would be valuable from an Azure perspective as well: the same operation might take only half the storage resources and reduce the cycles needed to manage the transaction log.
It's a simple syntax extension and a few 'if's to skip most of the writes to the log. The only problem is that the big data sh*t is now more important than anything else.
Just wondering if an alternative might be to allow applying this on a per-filegroup basis. This would not only allow complete tables to be non- (or minimally-) logged, but also table partitions that use a partition scheme to place data across filegroups. It allows creation/definition of one unit with this feature that can be used to contain any number of tables/partitions/indexes. Plus, it puts this durability feature "closer" to the physical file system, which kind of makes more sense to me.
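Purely as a sketch of the per-filegroup idea, and emphatically not existing T-SQL (no UNLOGGED filegroup option exists in SQL Server today), the syntax might look something like:

```sql
-- HYPOTHETICAL syntax -- illustrating the suggestion only.
ALTER DATABASE StagingDW
    ADD FILEGROUP FG_Unlogged; -- imagined: ... WITH (LOGGING = OFF)

-- Any table, partition, or index placed on FG_Unlogged would then be
-- minimally logged, and truncated (or disabled) after a dirty restart.
```

The appeal is that durability becomes a property of the storage unit, so partition schemes could mix logged and unlogged ranges within one table.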
Don Venardos commented
We have a batch processing system and we have entire databases that are used for eventual update of the application database. If a process doesn't complete successfully, we truncate the involved tables and start over. Transaction log throughput is always an issue. High-speed SSDs are required solely because SQL Server has to log every statement; a huge waste of money.
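Until something like UNLOGGED exists, the closest supported mitigation for this dump-and-reload pattern is minimally logged bulk loading: the SIMPLE or BULK_LOGGED recovery model plus a TABLOCK hint on the insert (names below are illustrative; minimal logging also has prerequisites, such as loading into a heap or an empty clustered table):

```sql
-- Put the staging database in SIMPLE recovery so the log can be reused.
ALTER DATABASE StagingDW SET RECOVERY SIMPLE;

-- TABLOCK makes INSERT...SELECT eligible for minimal logging, which
-- records page allocations rather than every individual row.
INSERT INTO dbo.BatchTarget WITH (TABLOCK)
SELECT RowId, Payload
FROM dbo.BatchSource;
```

This reduces log volume dramatically for qualifying loads, but it is load-by-load tuning rather than the table-level property this suggestion asks for.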