
Monday 5 March 2018

zheap: a storage engine to provide better control over bloat

In the past few years, PostgreSQL has advanced a lot in terms of features, performance, and scalability on many-core systems.  However, one problem that many enterprises still complain about is that database size keeps growing over time, which is commonly referred to as bloat.  PostgreSQL has a mechanism known as autovacuum, wherein a dedicated process (or set of processes) tries to remove dead rows from a relation in an attempt to reclaim space, but in many cases it can't reclaim the space completely.  In particular, an update always creates a new version of the tuple, which must eventually be removed by periodic vacuuming or by HOT-pruning, and yet in many cases the space is never fully reclaimed.  A similar problem occurs for tuples that are deleted.  This leads to bloat in the database.  My colleague Robert Haas has discussed some cases where the PostgreSQL heap tends to bloat in his blog post DO or UNDO - there is no VACUUM, and has also mentioned the solution (zheap: a new storage format for PostgreSQL) on which EnterpriseDB is working to avoid bloat whenever possible.  The intent of this blog post is to elaborate on that work in more detail and show some results.

This project has three major objectives:

1. Provide better control over bloat.  zheap will prevent bloat (a) by allowing in-place updates in common cases and (b) by reusing space as soon as a transaction that has performed a delete or non-in-place update has committed.  In short, with this new storage, whenever possible, we’ll avoid creating bloat in the first place.

2. Reduce write amplification both by avoiding rewrites of heap pages and by making it possible to do an update that touches indexed columns without updating every index.

3. Reduce the tuple size by (a) shrinking the tuple header and (b) eliminating most alignment padding.

In this blog post, I will mainly focus on the first objective (Provide better control over bloat) and leave other things for future blog posts on this topic.

In-place updates will be supported except when (a) the new tuple is larger than the old tuple and the increase in size makes it impossible to fit the larger tuple onto the same page, or (b) the update modifies a column that is covered by an index which has not been modified to support "delete-marking".  Note that the work to support delete-marking in indexes is yet to start, and we intend to support it at least for btree indexes.  For in-place updates, we write the old tuple to the undo log and the new tuple into the zheap page, which allows concurrent readers to read the old tuple from undo if the latest tuple is not yet visible to them.

Deletes write the complete tuple in the undo record, even though we could get away with writing just the TID as we do for an insert operation.  This allows us to reuse the space occupied by the deleted record as soon as the transaction that performed the operation commits.  Basically, if the delete is not yet visible to some concurrent transaction, that transaction can read the tuple from undo, and in the heap we can reclaim the space occupied by the record immediately (as soon as the transaction commits).

Below are some graphs that compare the size of a heap table and a zheap table when the table is constantly updated and there is a concurrent long-running transaction.  To perform these tests, we used pgbench to initialize the data (at scale factor 1000) and then used the simple-update test (which comprises one update, one select, and one insert per transaction) to perform the updates.  You can refer to the PostgreSQL manual for more about how to use pgbench.  These tests were performed on an x86_64 machine with 2 sockets, 14 cores per socket, 2 threads per core, and 64GB of RAM.  The non-default configuration for the tests was shared_buffers=32GB, min_wal_size=15GB, max_wal_size=20GB, checkpoint_timeout=1200, maintenance_work_mem=1GB, checkpoint_completion_target=0.9, synchronous_commit=off.  The graphs below show the size of the table on which the test performed updates.
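As a side note, the long-running transaction part of this workload is straightforward to reproduce: it is simply an open, idle transaction that holds a snapshot while pgbench keeps updating the table.  Below is a minimal sketch, assuming the pgbench_accounts table that pgbench creates (the exact sessions we used for the tests may have differed):

    -- Session 1: hold a snapshot open so that old row versions remain visible
    -- to this transaction (and, for heap, cannot be vacuumed away).
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT count(*) FROM pgbench_accounts;   -- acquires the snapshot
    -- ... leave this session idle for 15/30/45 minutes while pgbench runs ...
    COMMIT;

    -- Session 2: observe how much space the updated table occupies over time.
    SELECT pg_size_pretty(pg_table_size('pgbench_accounts'));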

[Graphs: heap vs. zheap table size for the 25-minute test (open transaction for the first 15 minutes) at 8 and 64 clients]

In the above test, we can see that the initial size of the table was 13GB with heap and 11GB with zheap.  After running the test for 25 minutes (out of which there was an open transaction for the first 15 minutes), the heap size grows to 16GB in the 8-client test and to 20GB in the 64-client test, whereas the zheap size remains at 11GB for both client counts at the end of the test.  The initial size of zheap is smaller because the tuple header is smaller in zheap.  Now, certainly for the first 15 minutes autovacuum can't reclaim any space due to the open transaction, but it can't reclaim it even after the open transaction has ended.  On the other hand, the size of zheap remains constant, and all the undo data generated is removed within seconds of the transaction ending.

Below are some more tests where the transaction has been kept open for a much longer duration.

After running the test for 40 minutes (out of which there was an open transaction for the first 30 minutes), the heap size grows to 19GB in the 8-client test and to 26GB in the 64-client test, whereas the zheap size remains at 11GB for both client counts at the end of the test, and all the undo generated during the test is discarded within a few seconds after the open transaction ends.

After running the test for 55 minutes (out of which there was an open transaction for the first 45 minutes), the heap size grows to 22GB in the 8-client test and to 28GB in the 64-client test, whereas the zheap size remains at 11GB for both client counts at the end of the test, and all the undo generated during the test is discarded within a few seconds after the open transaction ends.

So from all three tests above, it is clear that the heap keeps growing as the duration of the concurrent long-running transaction increases: in the 64-client tests it was 13GB at the start and grew to 20GB, then 26GB, then 28GB as the duration of the open transaction increased from 15 to 30 to 45 minutes.  We have run a few more tests along the same lines and found that as the duration of the open transaction increases, the size of the heap keeps increasing whereas zheap remains constant.  For example, similarly to the above, if we keep the transaction open for 60 minutes in a 70-minute test, the heap grows to 30GB.  The increase in size also depends on the number of updates performed during the test.

The above results show not only the impact on size; we also noticed that the TPS (transactions per second) with zheap was always better (by up to ~45%) in these tests.  In similar tests on another high-end machine, we see much better results with zheap with respect to performance.  I would like to defer the details of the raw performance of zheap vs. heap to another blog post, as this one has already become long.  I would also like to mention that the above results don't mean zheap will be better than heap in all cases; for example, rollbacks will be costlier in zheap.  Just to be clear, this storage format is proposed as another format alongside the current heap, so that users can decide which storage to use for their use case.

The code for this project has been published and is proposed to the PostgreSQL community as a feature for PG-12.  Thanks to Kuntal Ghosh for doing the performance tests mentioned in this blog post.

Tuesday 17 March 2015

Different Approaches for MVCC used in well known Databases

Database management systems use MVCC to avoid the problem of
writers blocking readers and vice versa, by making use of multiple
versions of data.

There are essentially two approaches to multi-version concurrency.

Approaches for MVCC
The first approach is to store multiple versions of records in the
database, and garbage collect records when they are no longer
required. This is the approach adopted by PostgreSQL and
Firebird/InterBase. SQL Server also uses a somewhat similar approach,
with the difference that old versions are stored in tempdb
(a database separate from the main database).

The second approach is to keep only the latest version of data in
the database, but reconstruct older versions of data dynamically
as required by using undo. This is the approach adopted by Oracle
and MySQL/InnoDB.


MVCC in PostgreSQL
In PostgreSQL, when a row is updated, a new version (called a tuple)
of the row is created and inserted into the table. The previous version
is given a pointer to the new version. The previous version is
marked "expired", but remains in the database until it is garbage collected.

In order to support multi-versioning, each tuple has additional data
recorded with it:
xmin - the ID of the transaction that inserted/updated the
row and created this tuple.
xmax - the ID of the transaction that deleted the row, or created a
new version of this tuple. Initially this field is null.
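Since xmin and xmax are exposed as system columns, these fields can be inspected directly; a small sketch (the table t is purely illustrative):

    CREATE TABLE t (id int, val text);
    INSERT INTO t VALUES (1, 'a');
    -- xmin shows the inserting transaction; xmax is 0 (unset) for a live tuple.
    SELECT xmin, xmax, * FROM t;
    -- In another session: BEGIN; UPDATE t SET val = 'b' WHERE id = 1;
    -- While that update is uncommitted, this session still sees the old tuple,
    -- now with xmax set to the updating transaction's ID.
    SELECT xmin, xmax, * FROM t;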

Transaction status is maintained in CLOG, which resides in $Data/pg_clog.
This log contains two bits of status information for each transaction;
the possible states are in-progress, committed, or aborted.

PostgreSQL does not undo changes to database rows when a transaction
aborts - it simply marks the transaction as aborted in CLOG. A PostgreSQL
table therefore may contain data from aborted transactions.

A vacuum cleaner process is provided to garbage collect expired/aborted
versions of a row. The vacuum cleaner also deletes index entries
associated with tuples that are garbage collected.
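A manual run on the illustrative table t from above makes this visible (autovacuum normally does the same work in the background):

    -- Reports how many dead row versions were removed, including from indexes.
    VACUUM (VERBOSE) t;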

A tuple is visible if its xmin is valid and its xmax is not.
"Valid" means "either committed or belonging to the current transaction".
To avoid consulting the CLOG repeatedly, PostgreSQL maintains
status flags in the tuple that indicate whether the tuple is "known committed"
or "known aborted".
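These per-tuple flags (hint bits) live in the tuple's infomask and can be examined with the pageinspect extension; a sketch, again using the illustrative table t:

    CREATE EXTENSION IF NOT EXISTS pageinspect;
    -- t_infomask carries, among other things, the HEAP_XMIN_COMMITTED /
    -- HEAP_XMIN_INVALID hint bits for each tuple on the page.
    SELECT lp, t_xmin, t_xmax, t_infomask
    FROM heap_page_items(get_raw_page('t', 0));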


MVCC in Oracle
Oracle maintains old versions in rollback segments (also known as
the 'undo log').  A transaction ID is not a sequential number; instead, it is
made up of a set of numbers that points to the transaction entry (slot) in a
rollback segment header.

Rollback segments have the property that new transactions can reuse
storage and transaction slots used by older transactions that are
committed or aborted.
This automatic reuse facility enables Oracle to manage large numbers
of transactions using a finite set of rollback segments.

The header block of the rollback segment is used as a transaction table,
where the status of each transaction is maintained (expressed as a System
Change Number, or SCN, in Oracle).  Rather than storing a transaction ID with
each row in the page, Oracle saves space by maintaining an array of unique
transaction IDs separately within the page, and stores only the offset into
this array with the row.

Along with each transaction ID, Oracle stores a pointer to the last undo record
created by that transaction for the page.  Not only are table rows stored in this
way; Oracle employs the same technique when storing index rows. This is
one of the major differences between PostgreSQL and Oracle.

When an Oracle transaction starts, it makes a note of the current SCN. When
reading a table or an index page, Oracle uses the SCN to determine whether
the page contains the effects of transactions that should not be visible to the
current transaction.  Oracle checks the commit status of a transaction by
looking up the associated rollback segment header, but, to save time, the first
time a transaction is looked up its status is recorded in the page itself to avoid
future lookups.

If the page is found to contain the effects of invisible transactions, then Oracle
recreates an older version of the page by undoing the effects of each such
transaction. It scans the undo records associated with each transaction and
applies them to the page until the effects of those transactions are removed.
The new page created this way is then used to access the tuples within it.
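The same undo/SCN machinery is what Oracle's flashback queries expose at the SQL level; an illustrative sketch (the table name and SCN value are made up):

    -- Current SCN of the database.
    SELECT current_scn FROM v$database;
    -- Ask Oracle to reconstruct the rows as of an older SCN, using the undo
    -- records described above.
    SELECT * FROM employees AS OF SCN 1234567;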

Record Header in Oracle
A row header never grows; it is always a fixed size. For non-cluster tables,
the row header is 3 bytes: one byte is used to store flags, one byte
indicates whether the row is locked (for example, because it has been updated
but not committed), and one byte stores the column count.


MVCC in SQL Server
Snapshot isolation and read committed using row versioning are enabled
at the database level.  Only databases that require this option need to enable
it and incur the overhead associated with it.
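For illustration, both options are switched on per database (the database name below is just an example):

    ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;  -- enables SNAPSHOT isolation
    ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;   -- read committed uses row versions
    -- A session can then opt into snapshot isolation explicitly:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;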

Versioning effectively starts with a copy-on-write mechanism that is
invoked when a row is modified or deleted. Row versioning-based
transactions can then "view" a consistent version of the data
from these previous row versions.

Row versions are stored within the version store that is housed within the
tempdb database.  More specifically, when a record in a table or index is
modified, the new record is stamped with the "sequence_number" of the
transaction that is performing the modification.
The old version of the record is copied to the version store, and the new record
contains a pointer to the old record in the version store.
If multiple long-running transactions exist and multiple "versions" are required,
records in the version store might contain pointers to even earlier versions of
the row.

Version store cleanup in SQL Server
SQL Server manages the version store size automatically, and maintains a
cleanup thread to make sure it does not keep versioned rows around longer
than needed.  For queries running under Snapshot Isolation, the version
store retains the row versions until the transaction that modified the data
completes and the transactions containing any statements that reference the
modified data complete.  For SELECT statements running under
Read Committed Snapshot Isolation, a particular row version is no longer
required, and is removed, once the SELECT statement has executed.

If tempdb actually runs out of free space, SQL Server calls the cleanup
function and will increase the size of the files, assuming we configured the
files for auto-grow.  If the disk gets so full that the files cannot grow,
SQL Server will stop generating versions. If that happens, any snapshot
query that needs to read a version that was not generated due to space
constraints will fail.
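The version store can also be observed directly through dynamic management views; a small illustrative query:

    -- Number of row versions currently held in tempdb's version store.
    SELECT COUNT(*) AS version_rows FROM sys.dm_tran_version_store;
    -- Space reserved by the version store, in MB (8 KB pages).
    SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
    FROM sys.dm_db_file_space_usage;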

Record Header in SQL Server
 4 bytes long
 - two bytes of record metadata (record type)
 - two bytes pointing forward in the record to the NULL bitmap; this is an
   offset to the actual data in the record (the fixed-length columns)

Versioning tag - this is a 14-byte structure that contains a timestamp
plus a pointer into the version store in tempdb.
Here the timestamp is the transaction_seq_number; the only time rows get
versioning information added to the record is when it is needed to support a
versioning operation.

As the versioning information is optional, I think that is the reason
they could store this information in index records as well without much
impact.

Database                      PostgreSQL                          Oracle                                     SQL Server
Storage for Old Versions      In the main segment (Heap/Index)    In a separate segment (Rollback/Undo)      In a separate database (tempdb - known as the version store)
Size of Tuple Header (bytes)  24                                  3                                          Fixed - 4; versioning tag - 14
Clean up                      Vacuum                              System Monitor Process (SMON)              Ghost Cleanup task

Conclusion of study
As the other databases store version/visibility information in the index,
index cleanup is easier for them (it is no longer tied to the heap for
visibility information).  The advantage of not storing visibility information
in the index is that, for delete operations, we don't need to perform an index
delete, and the index record can probably be somewhat smaller.

Oracle, and probably MySQL (InnoDB), needs to write a record to the undo
segment for an insert statement, whereas in PostgreSQL/SQL Server a new
record version is created only when a row is modified or deleted.

In Oracle, only the changed values are written to undo, whereas
PostgreSQL/SQL Server create a complete new tuple for the modified row;
Oracle's approach avoids bloat in the main heap segment.

Both Oracle and SQL Server have some way to restrict the growth of version
information, whereas PostgreSQL/PPAS doesn't have any.