#############
Release Notes
#############

5.1.7
=====

Fixes
-----

* fdbdr switch could take a long time to complete if the two clusters were not created at the same time.

5.1.6
=====

Fixes
-----

* Expiring a backup could cause the fdbbackup process to hang indefinitely.

5.1.5
=====

Fixes
-----

* The consistency check calculated the size of the database inefficiently.
* Could not create new directories with the Python and Ruby implementations of the directory layer.
* fdbcli could erroneously report that it was incompatible with some processes in the cluster.
* The commit command in fdbcli did not wait for the result of the commit before continuing to the next command.

Other Changes
-------------

* Renamed the ``multi_dc`` replication mode to ``three_datacenter``.

5.1.4
=====

Fixes
-----

* The master would recover twice when a new cluster controller was elected.
* The cluster controller could be elected on a storage process after restarting all processes in a cluster.
* Allow backup expiration to succeed if the backup is too new to be restorable.
* Process metric collection in status could sometimes fail.

5.1.3
=====

Fixes
-----

* The backup agents ran out of memory when heavily loaded.
* Storage servers were not marked as failed until after their files were deleted.
* The consistency check requested too many shards in the same request from the proxy.
* Client knobs for blob send/receive were reversed in meaning.
* fdbbackup status provides more information on reported errors.

5.1.2
=====

Fixes
-----

* Backup did not incrementally delete mutations from the mutation log.
* fdbcli status misreported completed backup/DR as running.
* Stopped producing the "fdbblob" alias for fdbbackup.

5.1.1
=====

Fixes
-----

* Bindings: Disposing a transaction during a commit resulted in a broken promise from ``get_versionstamp``.
* Bindings: Calling ``create_cluster`` before initializing the network would result in a crash.
* Latest restorable version of a running backup was not being updated in backup layer status.
* Backup layer status would sometimes show an error or an incorrect value for the recent blob bandwidth metric.
* Backup deletions were not deleting all of the files related to the backup.
* The cluster controller was sharing a process with the master even when better locations existed.
* Blob credentials files were being opened in read-write mode.
* Sometimes fdbbackup did not write log files even when ``--log`` was passed on the command line.

Performance
-----------

* Backup file uploads will respond to server-side throttling in the middle of a chunk upload rather than only between chunks.

5.1.0
=====

Features
--------

* Backups continually write snapshots at a configured interval, reducing restore times for long running backups.
* Old backup snapshots and associated logs can be deleted from a backup.
* Backup files are stored in a deep folder structure.
* Restore allows you to specify an approximate time instead of a version.
* Backup and DR agents can be paused from ``fdbbackup`` and ``fdbdr`` respectively.
* Added byte min and byte max atomic operations (see the sketch following this list).
* The behavior of atomic "and" and "min" operations has changed when the key doesn't exist in the database. If the key is not present, then an "and" or "min" is now equivalent to a set.
* Exception messages are more descriptive.
* Clients can view a sample of committed mutations.
* When switching to a DR cluster, the commit versions on that cluster will be higher than the versions on the primary cluster.
* Added a read-only lock aware transaction option.
* Automatically suppress trace log events which occur too frequently.
* Added a new ``multi_dc`` replication mode designed for cross data center deployments.
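The atomic operation changes above are visible from any of the bindings. Below is a minimal sketch using the Python bindings; the keys and operands are illustrative, and it assumes a cluster reachable through the default cluster file.

.. code-block:: python

    import fdb
    import struct

    fdb.api_version(510)
    db = fdb.open()  # uses the default cluster file

    @fdb.transactional
    def atomic_demo(tr):
        # byte_min/byte_max keep the lexicographically smaller/larger of
        # the existing value and the operand (new in 5.1).
        tr.byte_min(b'demo.low', b'apple')
        tr.byte_max(b'demo.high', b'apple')

        # "min" compares little-endian integers. As of 5.1, if the key is
        # not present the operation is equivalent to a set, so this stores 7.
        tr.min(b'demo.counter', struct.pack('<q', 7))

    atomic_demo(db)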
Performance
-----------

* The data distribution algorithm can split the system keyspace.
* Improved load balancing when servers are located across multiple data centers.
* Improved read latencies after recoveries by only making servers responsible for keys if they have finished copying the data from other servers.
* Improved recovery times by waiting until a process has finished recovering its data from disk before letting it be recruited for new roles.
* Improved 95% read version latencies by reducing the number of logs required to confirm that a proxy has not been replaced.
* Stopped the transaction logs from copying unneeded data after multiple successive recoveries.
* Significantly improved the performance of range reads.
* The cluster controller prefers to be recruited on stateless class processes and will not put other stateless roles on the same process.
* Excluded servers no longer take on stateless roles.
* Stateless roles will be proactively moved off of excluded processes.
* Dramatically improved restore speeds of large disk queue files.
* Clients get key location information directly from the proxies, significantly reducing the latency of worst case read patterns.
* Reduced the amount of work incompatible clients generate for coordinators and the cluster controller. In particular, this reduces the load on the cluster caused by using the multi-version client.
* Pop partially recovered mutations from the transaction log to save disk space after multiple successive recoveries.
* Stopped using network checksums when also using TLS.
* Improved cluster performance after recoveries by prioritizing processing new mutations on the logs over copying data from the previous logs.
* Backup agents prefer reading from servers in the same data center.

Fixes
-----

* New databases immediately configured into ``three_data_hall`` would not respect the ``three_data_hall`` constraint.
* Exclude considered the free space of non-storage processes when determining if an exclude was safe.
* ``fdbmonitor`` failed to start processes after fork failure.
* ``fdbmonitor`` will only stop processes when the configuration file is deleted if ``kill_on_configuration_change`` is set.
* The data distribution algorithm would hang indefinitely when asked to build storage teams with more than three servers.
* Mutations from a restore could continue to be applied for a very short amount of time after a restore was successfully aborted.

Extremely Rare Bug Fixes
------------------------

* Storage servers did not properly handle rollbacks to versions before their restored version.
* A newly recruited transaction log configured with the memory storage engine could crash on startup.
* The data distribution algorithm could split a key range so that one part did not have any data.
* Storage servers could update to an incorrect version after a master failure.
* The disk queue could report a commit as successful before the sync of the disk queue files completed.
* A disk queue which was shut down before completing its first commit could become unrecoverable.

Status
------

* If a cluster cannot recover because too many transaction logs are missing, status lists the missing logs.
* The list of connected clients includes their trace log groups.
* Status reports if a cluster is being used as a DR destination.

Bindings
--------

* API version updated to 510. See the :ref:`API version upgrade guide ` for upgrade details.
* Added versionstamp support to the Tuple layer in Java and Python (see the sketch following this list).
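The Tuple layer versionstamp support can be combined with the ``set_versionstamped_key`` atomic operation to build keys that sort in commit order. Below is a minimal sketch in Python; the ``'events'`` prefix is illustrative, and it assumes a cluster reachable through the default cluster file.

.. code-block:: python

    import fdb
    import fdb.tuple

    fdb.api_version(510)
    db = fdb.open()

    @fdb.transactional
    def log_event(tr, payload):
        # Pack a tuple containing an incomplete Versionstamp; the packed
        # bytes carry the offset that tells set_versionstamped_key where
        # to write the commit version at commit time.
        key = fdb.tuple.pack_with_versionstamp(
            ('events', fdb.tuple.Versionstamp()))
        tr.set_versionstamped_key(key, payload)

    log_event(db, b'hello')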
Java
----

* API versions prior to 510 are no longer supported.
* The bindings have been moved to the package ``com.apple.foundationdb`` from ``com.apple.cie.foundationdb``.
* We no longer offer a version of the Java bindings with our custom futures library or support Java versions less than 8. The bindings that use completable futures have been renamed to ``fdb-java``.
* Finalizers now log a warning to stderr if an object with native resources is not closed. This can be disabled by calling ``FDB.setUnclosedWarning()``.
* Implementers of the ``Disposable`` interface now implement ``AutoCloseable`` instead, with ``close()`` replacing ``dispose()``.
* ``AutoCloseable`` objects will continue to be closed in object finalizers, but this behavior is being deprecated. All ``AutoCloseable`` objects should be explicitly closed.
* ``AsyncIterator`` is no longer closeable.
* ``getBoundaryKeys()`` now returns a ``CloseableAsyncIterable`` rather than an ``AsyncIterator``.
* ``Transaction.getRange()`` no longer initiates a range read immediately. Instead, the read is issued by a call to ``AsyncIterable.asList()`` or ``AsyncIterable.iterator()``.
* Added ``hashCode()`` method to ``Subspace``.
* Added thread names to threads created by our default executor.
* The network thread by default will be named ``fdb-network-thread``.
* Added an overload of ``whileTrue()`` which takes a ``Supplier``.
* Added experimental support for enabling native callbacks from external threads.
* Fix: Converting the result of ``Transaction.getRange()`` to a list would issue an unneeded range read.
* Fix: Range iterators failed to close underlying native resources.
* Fix: Various objects internal to the bindings were not properly closed.

Other Changes
-------------

* Backups made prior to 5.1 can no longer be restored.
* Backup now uses a hostname in the connection string instead of a list of IPs when backing up to blob storage. This hostname is resolved using DNS.
* ``fdbblob`` functionality has been moved to ``fdbbackup``.
* ``fdbcli`` will warn the user if it is used to connect to an incompatible cluster.
* Cluster files that do not match the current connection string are no longer corrected automatically.
* Improved computation of available memory on pre-3.14 kernels.
* Stopped reporting blob storage connection credentials in ``fdbbackup`` status output.

Earlier release notes
---------------------

* :doc:`5.0 (API Version 500) `
* :doc:`4.6 (API Version 460) `
* :doc:`4.5 (API Version 450) `
* :doc:`4.4 (API Version 440) `
* :doc:`4.3 (API Version 430) `
* :doc:`4.2 (API Version 420) `
* :doc:`4.1 (API Version 410) `
* :doc:`4.0 (API Version 400) `
* :doc:`3.0 (API Version 300) `
* :doc:`2.0 (API Version 200) `
* :doc:`1.0 (API Version 100) `
* :doc:`Beta 3 (API Version 23) `
* :doc:`Beta 2 (API Version 22) `
* :doc:`Beta 1 (API Version 21) `
* :doc:`Alpha 6 (API Version 16) `
* :doc:`Alpha 5 (API Version 14) `