This page provides best practices for optimizing the performance of serverless functions that connect to CockroachDB, such as AWS Lambda functions and Google Cloud Functions.
Use connection pools that persist across function invocations
Use connection pools to manage the lifecycle of database connections established by serverless functions. Connection pools check connection health and re-establish broken connections in the event of a communication error.
When creating connection pools in serverless functions:
- Set the maximum connection pool size to 1, unless your function is multi-threaded and establishes multiple concurrent requests to your database within a single function instance.
- Do not set a minimum idle connection count. The connection pool should be free to open connections as needed.
- If supported by your pooling library, set the maximum lifetime on the connection pool to 30 minutes.
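For illustration, a pool configured with these settings might look like the following. This is a minimal sketch using SQLAlchemy, which is not used elsewhere on this page, and it assumes the connection string is available in the DATABASE_URL environment variable (as in the examples below):

```python
# A sketch of the recommended pool settings, using SQLAlchemy (an assumption;
# the examples below use node-postgres and psycopg2 directly).
import os

from sqlalchemy import create_engine, text

engine = create_engine(
    os.environ["DATABASE_URL"],
    pool_size=1,         # maximum pool size of 1 for a single-threaded function
    max_overflow=0,      # never open more than pool_size connections
    pool_recycle=1800,   # maximum connection lifetime of 30 minutes
    pool_pre_ping=True,  # check connection health on checkout
)

with engine.connect() as conn:
    conn.execute(text("SELECT 1"))
```

SQLAlchemy's pool opens connections lazily, so no minimum idle connection count needs to be configured.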
If you plan to invoke a serverless function frequently, configure the function to persist connection pools across function invocations. This helps to limit the number of new connection attempts to the cluster. One way to do this is to initialize the connection pool variable outside the scope of the serverless function definition.
For example, if an AWS Lambda function uses INSERT to add data to a table and runs every few seconds, initialize the connection pool variable outside of the handler function definition, then define the connection pool in the handler only if the pool does not already exist.
The following Node.js code implements this pattern:
```javascript
const { Pool } = require('pg')

// Declare the pool outside the handler so it persists across invocations
let pool

const insertRows = async (p) => {
  const client = await p.connect()
  try {
    await client.query('INSERT INTO table (col1, col2) VALUES (val1, val2)')
  } catch (err) {
    console.log(err.stack)
  } finally {
    client.release()
  }
}

exports.handler = async (context) => {
  // Create the pool only if it does not already exist
  if (!pool) {
    const connectionString = process.env.DATABASE_URL
    pool = new Pool({
      connectionString,
      max: 1
    })
  }
  await insertRows(pool)
}
```
The following Python code implements this pattern:
```python
import os
from psycopg2.pool import SimpleConnectionPool

# Declare the pool outside the handler so it persists across invocations
pool = None

def query(p):
    conn = p.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute("INSERT INTO table (col1, col2) VALUES (val1, val2)")
            conn.commit()
    finally:
        # Return the connection so later invocations can reuse the pool
        p.putconn(conn)

def lambda_handler(event, context):
    global pool
    # Create the pool only if it does not already exist
    if not pool:
        pool = SimpleConnectionPool(0, 1, dsn=os.environ['DATABASE_URL'])
    query(pool)
    return
```
Use CockroachDB Standard
As a database-as-a-service, CockroachDB Standard abstracts away the complexity of deploying, scaling, and load-balancing your database.
To create a free CockroachDB Standard cluster:
- Create a CockroachDB Cloud account. If this is your first CockroachDB Cloud organization, it will be credited with $400 in free trial credits to get you started.
- On the Get Started page, click Create cluster.
- On the Select a plan page, select Standard.
- On the Cloud & Regions page, select a cloud provider (GCP or AWS).
- In the Regions section, select a region for the cluster. Refer to CockroachDB Cloud Regions for the regions where CockroachDB Standard clusters can be deployed. To create a multi-region cluster, click Add region and select additional regions.
- Click Next: Capacity.
- On the Capacity page, keep the Provisioned capacity at the default value of 2 vCPUs.
- Click Next: Finalize.
- On the Finalize page, name your cluster. If an active free trial is listed in the right pane, you will not need to add a payment method, though you will need to do this by the end of the trial to maintain your organization's clusters.
- Click Create cluster.
Your cluster will be created in a few seconds and the Create SQL user dialog will display.
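Once you create a SQL user and copy the cluster's connection string, you can verify connectivity before wiring it into a function. The following is a minimal sketch using psycopg2, assuming the connection string is stored in the DATABASE_URL environment variable used in the examples above:

```python
# A minimal connectivity check, assuming DATABASE_URL holds the connection
# string generated for your cluster and SQL user.
import os

import psycopg2

conn = psycopg2.connect(os.environ["DATABASE_URL"])
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())  # a row containing the CockroachDB version string
conn.close()
```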
Deploy serverless functions in the same region as your cluster
To minimize network latency, deploy your serverless functions in the same region as your cluster. If your serverless function provider does not offer deployments in that region, choose the nearest available region.