

Automation has value only insofar as there are no compromises in architecture (to integrate with existing systems), extensibility (to address elements not automated), or performance.

Live API Creator delivers on all of the best-practice patterns. It revisits relevant optimizations on each logic change. Live API Creator performance remains at a high level over maintenance iterations just as database management system (DBMS) optimizers maintain high performance by revising retrieval plans.

This page details how Live API Creator delivers enterprise-class performance.

Minimize Client Latency

Modern applications often must support clients connected over high-latency, cloud-based connections. The following mechanisms are designed to minimize client-connection latency:

Rich Resource Objects

When retrieving objects for presentation, you can define resources that include multiple types, such as a Customer with their payments, orders, and items. These are delivered in a single response message, so that only a single trip is required.

This requirement is not fully satisfied by views. Views are often not updatable, and joins produce cartesian products when multiple child tables are joined to the same parent. In our example, a Customer with five Payments and ten Orders returns 50 rows, which is unreasonable for the client to decode and present.
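The arithmetic behind that 50-row figure can be sketched as follows. This is a hypothetical illustration (the customer and child names are made up), contrasting a flat join's cartesian product with a nested resource document:

```python
from itertools import product

# Hypothetical data: one customer with five payments and ten orders.
payments = [f"payment-{i}" for i in range(5)]
orders = [f"order-{i}" for i in range(10)]

# A SQL view joining both child tables to the same parent produces
# a cartesian product: every payment paired with every order.
flat_rows = list(product(payments, orders))
print(len(flat_rows))  # 50 rows to transmit and decode

# A rich resource object nests each child list under the parent,
# so a single response carries only 5 + 10 child entries.
resource = {"customer": "ACME", "payments": payments, "orders": orders}
print(len(resource["payments"]) + len(resource["orders"]))  # 15
```

The nested form also preserves which child belongs to which list, information the flat join forces the client to reconstruct.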

For more information about defining resources, see Customize your API.

Leverage Relational Database Query Power

Each resource/sub-resource can be a full relational query that you can send in a single trip to the REST (and then database) server. Contrast this with less powerful retrieval engines, where the client must compute common requirements such as sums and counts. This drives the number of queries up n-fold, which can affect performance.
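The n-fold cost is the classic n+1 query problem. The following sketch (with made-up data; the loops stand in for round trips) contrasts one client-issued query per customer with a single grouped aggregate computed server-side:

```python
from collections import Counter

# Hypothetical order rows, keyed by customer.
orders = [{"customer": c} for c in ("ACME", "ACME", "BETA")]
customers = {"ACME", "BETA"}

# Client-side: one pass (standing in for one query) per customer -> n trips.
per_customer_queries = {
    c: sum(1 for o in orders if o["customer"] == c) for c in customers
}

# Server-side: one grouped aggregate query (a GROUP BY) -> one trip.
grouped = Counter(o["customer"] for o in orders)

print(per_customer_queries == dict(grouped))  # True -- same result, n trips vs one
```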


Pagination

Large result sets can affect the client, network, server, and database. You can truncate large results, with provisions to retrieve the remaining results using pagination, such as when the end user scrolls.

Pagination can be a complex problem. Consider a resource of Customer, Orders, and Items. If there are many orders, pagination must occur at this level, with provision for including the line items on subsequent pagination requests.
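The parent-level truncation described above can be sketched as follows. This is a hypothetical illustration, not Live API Creator's actual pagination protocol: the page is cut at the Order level, each returned Order still carries its nested Items, and an offset marker lets the client request the remainder:

```python
def paginate_orders(orders, offset, limit):
    """Return one page of orders plus the offset of the next page
    (None when there are no more rows). Hypothetical sketch."""
    page = orders[offset:offset + limit]
    next_offset = offset + limit if offset + limit < len(orders) else None
    return page, next_offset

# Hypothetical resource: 25 orders, each with three nested line items.
orders = [{"order": i, "items": [f"item-{i}-{j}" for j in range(3)]}
          for i in range(25)]

page, nxt = paginate_orders(orders, 0, 10)
print(len(page), nxt)  # 10 10 -- first ten orders, with items, plus a marker
```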

Batched Updates

Network considerations apply to updates as well as retrieval. Consider many rows retrieved into a client, followed by an update. Using APIs, clients can send only the changes, instead of the entire set of objects, and can send multiple row types (for example, an Order and its Items) in a single message. This results in a single, small update message.
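The "send only the changes" idea reduces to a diff of the retrieved row against the edited row. A minimal sketch, with hypothetical attribute names:

```python
def changed_fields(original, edited):
    """Compute the minimal update payload: only the attributes the
    client actually changed. Hypothetical sketch of delta updates."""
    return {k: v for k, v in edited.items() if original.get(k) != v}

# Hypothetical row retrieved by the client, then edited on screen.
original = {"id": 7, "name": "ACME", "balance": 100, "city": "Oslo"}
edited = dict(original, balance=120)

payload = changed_fields(original, edited)
print(payload)  # {'balance': 120} -- one small message, not the whole row
```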

Single-Message Update/Refresh

Business logic consists not only of validations, but also of derivations. These derivations can often involve rows visible to, but not directly updated by, the client. For example, saving an order might update the customer's balance. The updated balance must be reflected on the screen.

Clients typically solve this problem by re-retrieving the data. This is unfortunate in a number of ways. First, it is an extra client/server trip over a high-latency network. Second, it can be difficult to program, for example, when the order's key is system-assigned: the client may not know the computed key and may need to re-retrieve the entire rich result set.

Live API Creator solves this by returning the refresh information in the update response. The client sends a set of updates in a single message and uses the response to show the computations on related data, with no additional retrieval.
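The pattern can be sketched server-side as follows. This is a simplified, hypothetical handler (the response key and derivation are illustrative, not Live API Creator's exact wire format): the update runs, derivations fire, and the refreshed rows, including the system-assigned key, ride back in the same response:

```python
def save_order(order, customer):
    """Hypothetical server-side sketch: apply the update, run the
    derivations, and return the refreshed rows in the response."""
    order = dict(order, id=1001)  # system-assigned key
    customer = dict(customer,
                    balance=customer["balance"] + order["amount"])  # derivation
    return {"txsummary": [order, customer]}  # refresh info rides the response

resp = save_order({"amount": 50}, {"name": "ACME", "balance": 100})
print(resp["txsummary"][0]["id"])        # 1001 -- client learns the new key
print(resp["txsummary"][1]["balance"])   # 150  -- screen updated, no re-GET
```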

Server-Enforced Integrity Minimizes Client Traffic

An infamous anti-pattern is to place business logic in the client. This does not ensure integrity (particularly when the clients are partners), and causes multiple client/server trips. For example, inserting a new Line Item may require business logic that updates the Order, the Customer, and the Product. If these are issued from the client, the result is four client/server trips when only one should be required.

Minimize DBMS Load

The logic engine minimizes the cost and number of SQL operations as described in the following sections.

Minimize Server/DB Latency

You can define the desired region for your API Creator. This minimizes latency for SQL operations issued by the API Server.

Update Logic Pruning Eliminates SQLs

The logic engine prunes (eliminates) SQL operations where possible. For example:

    • Parent Reference Pruning. SQLs to access parent rows are averted if the other (local) expression values are unchanged. For example, if attribute-X is derived as attribute-Y * parent.attribute-1, the retrieval of the parent is eliminated if attribute-Y is not altered.
    • Cascade Pruning. If parent attributes that are referenced by child logic are altered, Live API Creator cascades the change to each child row. If the parent attribute is not altered, the cascade overhead is pruned. In the same example, the value of parent.attribute-1 is cascaded only if it is altered.
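The parent-reference pruning decision boils down to a set intersection: fetch the parent only if a local operand of the derivation changed. A hypothetical sketch of the attribute-X = attribute-Y * parent.attribute-1 example (attribute names are illustrative):

```python
def needs_parent_fetch(changed_attrs, local_operands):
    """Parent reference pruning: the parent row is read only when a
    local operand of the derivation changed. Hypothetical sketch."""
    return bool(changed_attrs & local_operands)

# attribute-X = attribute-Y * parent.attribute-1
local_operands = {"attribute_Y"}  # local terms of the expression

print(needs_parent_fetch({"notes"}, local_operands))        # False: SQL pruned
print(needs_parent_fetch({"attribute_Y"}, local_operands))  # True: fetch parent
```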

Update Adjustment Logic Eliminates Multi-level Aggregate SQLs

The logic engine minimizes the cost of SQL operations. For example:
    • Adjustment. For persisted sum/count aggregates, Live API Creator adjusts the parent based on the old/new values in the child by making a single-row update. Aggregate queries can be particularly costly when they cascade. For example, the Customer's balance is the sum of the Order amounts, each of which is the sum of that Order's Line Item amounts.
    • Adjustment Pruning. Adjustment occurs only when the summed attribute, the foreign key, or the qualification condition changes. If none of these change, parent access/chaining is averted.
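Adjustment and its pruning can be sketched in a few lines. This hypothetical function applies the child delta to the persisted parent aggregate (a single-row update) instead of re-running a SELECT SUM over all children, and skips the parent entirely when the delta is zero:

```python
def adjust_parent(parent_balance, old_child_amount, new_child_amount):
    """Adjustment: update the persisted aggregate from the child's
    old/new delta, rather than re-aggregating all children.
    Returns (new_balance, parent_touched). Hypothetical sketch."""
    delta = new_child_amount - old_child_amount
    if delta == 0:
        return parent_balance, False  # adjustment pruned: no parent access
    return parent_balance + delta, True

print(adjust_parent(1000, old_child_amount=40, new_child_amount=55))  # (1015, True)
print(adjust_parent(1000, old_child_amount=40, new_child_amount=40))  # (1000, False)
```

When aggregates cascade (Line Item to Order total to Customer balance), the same delta propagates upward, so each level remains a single-row update.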

Transaction Caching

Consider inserting an Order with multiple line items. Per the derivation logic, Live API Creator must update ("adjust") the Order total and the Customer balance for each line item.

Live API Creator must not retrieve these objects multiple times. Doing so would incur substantial overhead and make it difficult to ensure consistent results. Instead, Live API Creator maintains a cache for each transaction. All reads and writes go through the cache, which is flushed at the end of the transaction. This eliminates many SQLs and ensures a consistent view of updated data.
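A transaction cache of this kind can be sketched as follows. This is a hypothetical, in-memory stand-in (the dict plays the role of the database): each row is read at most once per transaction, writes accumulate in the cache, and everything is flushed in one pass at commit:

```python
class TransactionCache:
    """Hypothetical sketch of a per-transaction row cache."""

    def __init__(self, db):
        self.db, self.cache, self.reads = db, {}, 0

    def read(self, key):
        if key not in self.cache:
            self.cache[key] = dict(self.db[key])  # one SQL per row per txn
            self.reads += 1
        return self.cache[key]

    def write(self, key, attrs):
        self.read(key).update(attrs)  # update the cached copy only

    def flush(self):
        for key, row in self.cache.items():  # single pass at commit
            self.db[key] = row

db = {"order:1": {"total": 0}}
txn = TransactionCache(db)
for amount in (10, 20, 30):  # three line items adjust the same Order
    txn.write("order:1", {"total": txn.read("order:1")["total"] + amount})
txn.flush()
print(db["order:1"]["total"], txn.reads)  # 60 1 -- one read, not three
```

Because all three adjustments hit the cached copy, each sees the result of the previous one, which is the consistency guarantee the text describes.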


Good performance dictates that data not be locked on retrieval. Concurrency is typically addressed by optimistic locking. API Server automates optimistic locking for all transactions. This can be based on a configured time-stamp column, or, if there is none, a hash of all resource attributes.

Transaction bracketing is automatic. API Server automatically bundles PUT/POST/DELETE requests (which may comprise multiple rows) into a transaction, including all logic-triggered updates.

GET: Optimistic Locking

A well-known pattern is optimistic locking: because acquiring locks while viewing data reduces concurrency, locks are not acquired while processing GET requests. Instead, API Server ensures that updated data has not been altered since its initial retrieval.

For more information about optimistic locking, see optimistic concurrency control on Wikipedia.
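The hash-based variant (used when no timestamp column exists) can be sketched as follows. This is a hypothetical illustration of the check, not Live API Creator's exact checksum algorithm: a digest of all resource attributes is computed at GET time, and the update is rejected if the row's current digest no longer matches:

```python
import hashlib
import json

def row_checksum(row):
    """Digest of all resource attributes, used for optimistic locking
    when no timestamp column is defined. Hypothetical sketch."""
    return hashlib.sha256(
        json.dumps(row, sort_keys=True).encode()).hexdigest()

stored = {"id": 1, "name": "ACME", "balance": 100}
checksum_at_get = row_checksum(stored)  # returned to the client with the GET

# Another transaction changes the row before our PUT arrives.
stored["balance"] = 175

# The update is rejected because the row changed since retrieval.
print(row_checksum(stored) == checksum_at_get)  # False -> conflict reported
```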

PUT, POST and DELETE: Leverage DBMS Locking and Transactions

Update requests are locked using DBMS Locking services. Consider the following cases:
      • Client updates. In accordance with optimistic locking, Live API Creator ensures that client-submitted rows have not been altered since they were retrieved. This is done by write-locking the row and comparing a time stamp, or (if one is not defined) a hash code of all retrieved data, so a time stamp column is not required. This check is done as the first part of the transaction, so optimistic locking conflicts are detected before SQL overhead is incurred.
      • Rule chaining. All rows processed in a transaction as a consequence of logic execution, such as adjusting parent sums or counts, are read locked. Write locks are acquired at the end of the transaction, during the "flush" phase. Many other transactions' read locks could have been acquired and released between the time of the initial read lock and the flush.
      • Referential integrity. Such data is read in accordance with DBMS policy.

Server Optimizations

The logic server itself promotes good performance, as described in the following sections.

Load Balanced Dynamic Clustering

Cloud-based Live API Creator implementations scale to meet load and provide failover using standard load-balancer services, running as many server instances as required. Each server is stateless, and incoming requests are load-balanced across the set of running servers.

Meta Data Caching (Logic and Security)

API Creator does not require disk reads to process each request. It reads the logic and security information you specify into a cache, and persists this cache across transactions until you alter your logic.

Direct Execution (No Code Generation)

Reactive logic is more expressive than procedural code. Compiling logic into JavaScript would therefore represent a significant performance issue, so reactive logic is executed directly rather than compiled into JavaScript.


Performance Monitoring

Transparent information on system performance is an important requirement.


You can view the logs of SQL and rule execution.

For more information about viewing the logs, see View Logging Information.


You can obtain aggregate information on the Metrics page.

For more information about using the Metrics page, see Metrics.