PostgreSQL for SQL Server Engineers

For .NET engineers who know: SQL Server, T-SQL, SSMS, Windows Authentication, and datetime2/uniqueidentifier types

You’ll learn: Where PostgreSQL and SQL Server are functionally identical, where they differ in syntax and convention, and the specific gotchas that will slow you down on your first project

Time: 15-20 min read


The .NET Way (What You Already Know)

You know SQL Server deeply. You write T-SQL fluently, you navigate SSMS without thinking, and you rely on Windows Authentication for local development because it just works. You know that IDENTITY(1,1) generates surrogate keys, that uniqueidentifier stores GUIDs, that nvarchar(MAX) holds Unicode text, and that datetime2(7) gives you 100-nanosecond precision, storing UTC by convention when you need it.

You also know the SQL Server ecosystem: execution plans in SSMS, SET STATISTICS IO ON, sp_WhoIsActive, columnstore indexes, and how to configure a linked server. That tooling experience is real and valuable. None of it transfers directly to PostgreSQL — but the underlying concepts do.

PostgreSQL is not a simplified version of SQL Server. It is a different full-featured RDBMS with a different lineage (university research vs. a commercial product that grew out of Microsoft's Sybase partnership), different defaults, and in several areas — particularly JSON, extensibility, and standards compliance — it is ahead of SQL Server. The mental model shift is smaller than moving from a relational database to MongoDB, but larger than moving between SQL Server versions.


The PostgreSQL Way

Connection Strings and Authentication

SQL Server on Windows defaults to Windows Authentication (Integrated Security=True). You log in as your Windows user, no password in the connection string. PostgreSQL has no equivalent you will use in practice (SSPI exists on Windows but is rarely configured). Every connection uses a database username and password (or SSL certificates, LDAP, etc.). For local development you will almost always use postgres as the superuser with a password you set during installation.

SQL Server connection string:

Server=localhost;Database=MyApp;Integrated Security=True;TrustServerCertificate=True;

PostgreSQL connection string (libpq format, used by most drivers):

postgresql://myuser:mypassword@localhost:5432/myapp

PostgreSQL connection string (.NET Npgsql):

Host=localhost;Port=5432;Database=myapp;Username=myuser;Password=mypassword;

Render (cloud PostgreSQL) connection string:

postgresql://myuser:mypassword@dpg-xxxxx.oregon-postgres.render.com:5432/myapp_xxxx

Render provides internal and external connection strings. Use the internal URL from services running on Render (same datacenter, no TLS overhead). Use the external URL from your local machine or CI, and append ?sslmode=require to it when connecting from outside Render.

# External (local dev or CI)
postgresql://myuser:mypassword@dpg-xxxxx.oregon-postgres.render.com/myapp_xxxx?sslmode=require

# Internal (from another Render service)
postgresql://myuser:mypassword@dpg-xxxxx/myapp_xxxx

psql vs SSMS

SSMS is a rich GUI. psql is a command-line client. Most developers also use pgAdmin (GUI, free), TablePlus, or the Database panel in VS Code with the PostgreSQL extension.

| SSMS Action             | psql Equivalent                      |
|-------------------------|--------------------------------------|
| Connect to server       | psql postgresql://user:pass@host/db  |
| List databases          | \l                                   |
| Switch database         | \c dbname                            |
| List tables             | \dt or \dt schema.*                  |
| Describe table          | \d tablename                         |
| List schemas            | \dn                                  |
| Show execution plan     | EXPLAIN ANALYZE SELECT ...           |
| Run a script file       | \i /path/to/file.sql                 |
| Toggle expanded output  | \x                                   |
| Quit                    | \q                                   |

psql also has tab completion, command history, and \e to open your query in $EDITOR. For production queries and debugging you will spend time in psql. Get comfortable with \d — it is your metadata workhorse.
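For example, \d on a hypothetical orders table shows its columns, indexes, and constraints in one view (output trimmed; the exact layout varies by PostgreSQL version):

\d orders
                         Table "public.orders"
   Column    |           Type           | Nullable |           Default
-------------+--------------------------+----------+------------------------------
 id          | integer                  | not null | generated always as identity
 customer_id | integer                  | not null |
 total       | numeric(12,2)            | not null |
 created_at  | timestamp with time zone | not null | now()
Indexes:
    "orders_pkey" PRIMARY KEY, btree (id)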

Schemas vs. Databases

This is one of the most disorienting conceptual differences.

In SQL Server, schemas are namespaces inside a database (dbo.Orders, sales.Orders). Schemas are cheap to create and commonly used to organize tables by domain. A SQL Server instance can have multiple databases, each with its own set of schemas.

In PostgreSQL, the word “schema” means the same thing — a namespace inside a database. But the architecture difference is in how you use them:

  • In SQL Server, cross-database queries are common: SELECT * FROM OtherDb.dbo.Orders.
  • In PostgreSQL, cross-database queries within a single server are not natively supported. Each database is fully isolated. If you need cross-database access you use dblink or postgres_fdw (foreign data wrappers), which is heavier than SQL Server’s linked servers.

The practical consequence: PostgreSQL shops tend to put everything in one database and use schemas for organization. SQL Server shops often have many databases on one instance.
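If you do need to reach across databases, the postgres_fdw route looks roughly like this (a sketch; the server, database, and table names here are hypothetical):

-- Query a table that lives in another database through a foreign data wrapper
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER other_db_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'other_db', port '5432');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER other_db_server
    OPTIONS (user 'myuser', password 'mypassword');

-- Expose the remote "orders" table in a local schema
CREATE SCHEMA remote;
IMPORT FOREIGN SCHEMA public LIMIT TO (orders)
    FROM SERVER other_db_server INTO remote;

SELECT * FROM remote.orders;  -- reads from other_db through the wrapper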

-- PostgreSQL: create a schema and use it
CREATE SCHEMA sales;
CREATE TABLE sales.orders (id SERIAL PRIMARY KEY, total NUMERIC(12, 2));
SELECT * FROM sales.orders;

-- The default schema is "public"
CREATE TABLE products (id SERIAL PRIMARY KEY, name TEXT);
-- This is equivalent to:
CREATE TABLE public.products (id SERIAL PRIMARY KEY, name TEXT);

The search_path setting controls which schemas PostgreSQL checks when you use an unqualified table name, analogous to SQL Server’s default schema on a user.

SET search_path TO sales, public;
-- Now "orders" resolves to sales.orders, "products" to public.products

Data Type Mapping

| SQL Server Type        | PostgreSQL Type       | Notes                                                                 |
|------------------------|-----------------------|-----------------------------------------------------------------------|
| NVARCHAR(n)            | VARCHAR(n)            | PostgreSQL is always Unicode (UTF-8). No N prefix needed.             |
| NVARCHAR(MAX)          | TEXT                  | Unlimited length. TEXT and VARCHAR have identical performance in PG.  |
| VARCHAR(n)             | VARCHAR(n)            | Same, but SQL Server VARCHAR is not Unicode by default.               |
| INT                    | INTEGER or INT4       | Identical 4-byte integer.                                             |
| BIGINT                 | BIGINT or INT8        | Identical 8-byte integer.                                             |
| SMALLINT               | SMALLINT or INT2      | Identical.                                                            |
| BIT                    | BOOLEAN               | PG uses TRUE/FALSE. SQL Server uses 1/0.                              |
| DECIMAL(p,s)           | NUMERIC(p,s)          | Functionally identical. PG also has DECIMAL as an alias.              |
| FLOAT                  | DOUBLE PRECISION      | 8-byte IEEE 754.                                                      |
| REAL                   | REAL                  | 4-byte IEEE 754.                                                      |
| UNIQUEIDENTIFIER       | UUID                  | PG stores UUID as 16 bytes, not a string. Use gen_random_uuid().      |
| DATETIME2              | TIMESTAMPTZ           | Always store timestamps with time zone in PG. See gotchas.            |
| DATETIME               | TIMESTAMP             | Without time zone — avoid for most use cases.                         |
| DATE                   | DATE                  | Identical.                                                            |
| TIME                   | TIME                  | Available, but TIMETZ (time with time zone) is rarely useful.         |
| BINARY/VARBINARY       | BYTEA                 | Binary data.                                                          |
| XML                    | XML                   | PG has XML support, but JSON/JSONB is more idiomatic.                 |
| NTEXT (deprecated)     | TEXT                  |                                                                       |
| MONEY                  | NUMERIC(19,4)         | PG has a MONEY type but it is locale-dependent; avoid it.             |
| ROWVERSION / TIMESTAMP | No direct equivalent  | Use xmin system column or an explicit updated_at column.              |
| HIERARCHYID            | No equivalent         | Model with ltree extension or adjacency list.                         |
| GEOGRAPHY/GEOMETRY     | GEOMETRY via PostGIS  | PostGIS is a mature extension, widely used.                           |

Auto-increment / Identity:

-- SQL Server
CREATE TABLE orders (
    id INT IDENTITY(1,1) PRIMARY KEY,
    total DECIMAL(12,2) NOT NULL
);

-- PostgreSQL — old style (SERIAL is a shorthand, not a true type)
CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    total NUMERIC(12, 2) NOT NULL
);

-- PostgreSQL — modern style (SQL standard, preferred from PG 10+)
CREATE TABLE orders (
    id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    total NUMERIC(12, 2) NOT NULL
);

-- UUID primary key (common in distributed systems)
CREATE TABLE orders (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    total NUMERIC(12, 2) NOT NULL
);

SERIAL creates a sequence and sets the default. GENERATED ALWAYS AS IDENTITY is the SQL standard and prevents accidental manual inserts to the identity column (unless you use OVERRIDING SYSTEM VALUE). Both work. New projects should prefer GENERATED ALWAYS AS IDENTITY. For the UUID variant, note that gen_random_uuid() is built in from PostgreSQL 13; on older versions install the pgcrypto extension first.
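For example, here is what the GENERATED ALWAYS guard looks like in practice (a quick sketch against the identity-based orders table above):

-- Fails: you cannot insert into a GENERATED ALWAYS identity column directly
INSERT INTO orders (id, total) VALUES (999, 10.00);
-- ERROR:  cannot insert a non-DEFAULT value into column "id"

-- Works: explicitly override the identity (e.g. when migrating existing data)
INSERT INTO orders (id, total) OVERRIDING SYSTEM VALUE VALUES (999, 10.00);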

JSON Support: PostgreSQL Is Ahead

SQL Server added JSON support in 2016 and it remains a string-parsing feature. There are no JSON-specific indexes, and querying JSON is done through functions like JSON_VALUE() that parse the string on each call.

PostgreSQL has two JSON types:

  • JSON — stores the JSON string verbatim, validates it, parses on every access
  • JSONB — stores the parsed binary representation. Supports indexing. Faster for reads. This is what you almost always want.

-- SQL Server JSON (stored as NVARCHAR, not a real type)
CREATE TABLE products (
    id INT IDENTITY(1,1) PRIMARY KEY,
    attributes NVARCHAR(MAX) CHECK (ISJSON(attributes) = 1)
);
-- Query — parses the string every time
SELECT JSON_VALUE(attributes, '$.color') FROM products WHERE id = 1;

-- PostgreSQL JSONB
CREATE TABLE products (
    id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    attributes JSONB NOT NULL DEFAULT '{}'
);
-- Query — reads from binary representation
SELECT attributes->>'color' FROM products WHERE id = 1;

-- Index a specific field inside JSONB
CREATE INDEX idx_products_color ON products ((attributes->>'color'));

-- GIN index for arbitrary key lookups
CREATE INDEX idx_products_attrs ON products USING GIN (attributes);

-- Query using the GIN index (does this JSONB contain this key-value?)
SELECT * FROM products WHERE attributes @> '{"color": "red"}';

-- Update a single field without replacing the whole document
UPDATE products
SET attributes = attributes || '{"weight": 1.5}'::jsonb
WHERE id = 1;

-- Remove a key
UPDATE products
SET attributes = attributes - 'weight'
WHERE id = 1;

The -> operator extracts a JSON value as JSONB. The ->> operator extracts it as text. The @> operator checks containment (and uses GIN indexes). This is significantly more capable than SQL Server’s JSON functions.

SQL Server full-text search requires a Full-Text Catalog, CONTAINS(), and separate installation of the Full-Text Search feature. PostgreSQL full-text search is built in and uses tsvector/tsquery types.

-- PostgreSQL full-text search

-- Stored tsvector for performance
ALTER TABLE articles ADD COLUMN search_vector TSVECTOR;
UPDATE articles SET search_vector = to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''));
CREATE INDEX idx_articles_fts ON articles USING GIN (search_vector);

-- Keep it updated with a trigger
CREATE FUNCTION articles_search_trigger() RETURNS trigger AS $$
BEGIN
  NEW.search_vector := to_tsvector('english', coalesce(NEW.title, '') || ' ' || coalesce(NEW.body, ''));
  RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER articles_search_update
  BEFORE INSERT OR UPDATE ON articles
  FOR EACH ROW EXECUTE FUNCTION articles_search_trigger();

-- Query
SELECT id, title,
       ts_rank(search_vector, query) AS rank
FROM articles,
     to_tsquery('english', 'database & performance') query
WHERE search_vector @@ query
ORDER BY rank DESC
LIMIT 20;

For more advanced needs (fuzzy matching, typo tolerance), use the pg_trgm extension, which adds trigram-based similarity search and makes LIKE/ILIKE patterns indexable.
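A minimal pg_trgm sketch, reusing the articles table (the index name and query are illustrative):

CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Trigram GIN index: makes ILIKE with leading wildcards and similarity searches indexable
CREATE INDEX idx_articles_title_trgm ON articles USING GIN (title gin_trgm_ops);

-- Fuzzy match: similarity() returns 0..1, % is the "similar enough" operator
SELECT title, similarity(title, 'postgress performance') AS sim
FROM articles
WHERE title % 'postgress performance'
ORDER BY sim DESC
LIMIT 10;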

CTEs and Window Functions

Good news: CTEs and window functions are nearly identical between SQL Server and PostgreSQL. The syntax maps almost one-to-one.

-- SQL Server CTE
WITH ranked AS (
    SELECT
        customer_id,
        total,
        ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY total DESC) AS rn
    FROM orders
)
SELECT * FROM ranked WHERE rn = 1;

-- PostgreSQL CTE — identical syntax
WITH ranked AS (
    SELECT
        customer_id,
        total,
        ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY total DESC) AS rn
    FROM orders
)
SELECT * FROM ranked WHERE rn = 1;

One difference: in PostgreSQL 11 and earlier, CTEs were “optimization fences” — the query planner could not push predicates into them. Since PostgreSQL 12, non-recursive CTEs that are referenced only once are inlined by default (same as SQL Server). You can force the old behavior with WITH cte AS MATERIALIZED (...).

PostgreSQL also supports recursive CTEs; the one syntactic difference is that the RECURSIVE keyword is required (WITH RECURSIVE), where T-SQL uses plain WITH.
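A short recursive CTE sketch, assuming a hypothetical categories table with an adjacency-list parent_id column:

-- Walk a category tree from the roots downwards
WITH RECURSIVE category_tree AS (
    SELECT id, parent_id, name, 1 AS depth
    FROM categories
    WHERE parent_id IS NULL            -- anchor: the root categories
    UNION ALL
    SELECT c.id, c.parent_id, c.name, ct.depth + 1
    FROM categories c
    JOIN category_tree ct ON c.parent_id = ct.id
)
SELECT * FROM category_tree
ORDER BY depth, name;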

Stored Procedures vs. Functions

SQL Server has both stored procedures and user-defined functions with fairly clear distinctions. PostgreSQL blurs this line: historically PostgreSQL had only functions, and CREATE PROCEDURE was added in PostgreSQL 11 (with different semantics than SQL Server procedures).

| Feature                 | SQL Server SP | PostgreSQL Function                    | PostgreSQL Procedure |
|-------------------------|---------------|----------------------------------------|----------------------|
| Returns a result set    | Yes           | Yes (via RETURNS TABLE or SETOF)       | No                   |
| Returns scalar value    | No            | Yes                                    | No                   |
| Can manage transactions | Yes           | No                                     | Yes (PG 11+)         |
| Called with EXEC        | Yes           | No                                     | CALL proc_name()     |
| Called with SELECT      | No            | Yes                                    | No                   |
| Languages               | T-SQL         | PL/pgSQL, SQL, PL/Python, PL/v8, etc.  | PL/pgSQL, SQL        |

In practice, PostgreSQL shops use functions for almost everything, reserving procedures for cases where you need explicit COMMIT/ROLLBACK inside the routine.

-- SQL Server stored procedure
CREATE PROCEDURE GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SELECT * FROM orders WHERE customer_id = @CustomerId;
END;

EXEC GetOrdersByCustomer @CustomerId = 42;

-- PostgreSQL equivalent — function returning a table
CREATE OR REPLACE FUNCTION get_orders_by_customer(p_customer_id INTEGER)
RETURNS TABLE (id INTEGER, total NUMERIC, created_at TIMESTAMPTZ)
LANGUAGE SQL
AS $$
    SELECT id, total, created_at
    FROM orders
    WHERE customer_id = p_customer_id;
$$;

SELECT * FROM get_orders_by_customer(42);
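For the rare case where you do want a procedure, here is a sketch of one that commits in batches (a hypothetical purge routine; requires PG 11+):

-- Procedures may COMMIT/ROLLBACK internally; functions may not
CREATE OR REPLACE PROCEDURE purge_old_orders(p_before TIMESTAMPTZ)
LANGUAGE plpgsql
AS $$
BEGIN
    LOOP
        DELETE FROM orders
        WHERE id IN (
            SELECT id FROM orders
            WHERE created_at < p_before
            LIMIT 1000
        );
        EXIT WHEN NOT FOUND;   -- stop once a batch deletes nothing
        COMMIT;                -- allowed in a procedure, keeps each batch small
    END LOOP;
END;
$$;

CALL purge_old_orders(NOW() - INTERVAL '2 years');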

Performance Tuning: EXPLAIN ANALYZE

EXPLAIN ANALYZE is the PostgreSQL equivalent of SQL Server’s “Include Actual Execution Plan.” It shows the query plan with estimated and actual row counts, execution time, and which indexes were used.

-- SQL Server
-- Turn on in SSMS: Query > Include Actual Execution Plan (or Ctrl+M)
SELECT * FROM orders WHERE customer_id = 42;

-- PostgreSQL
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;

Sample output:

Index Scan using idx_orders_customer on orders  (cost=0.43..8.45 rows=3 width=72) (actual time=0.021..0.024 rows=3 loops=1)
  Index Cond: (customer_id = 42)
Planning Time: 0.123 ms
Execution Time: 0.041 ms

Use EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) for full detail including cache hit ratios — the JSON format renders nicely in tools like explain.dalibo.com (paste the JSON for a visual tree).
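For example:

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;
-- adds lines such as "Buffers: shared hit=4 read=1",
-- showing how many pages came from cache vs. disk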

Key plan node types you will encounter:

  • Seq Scan — full table scan (like a Table Scan in SQL Server). Usually bad on large tables.
  • Index Scan — uses a B-tree index to fetch rows. Good.
  • Index Only Scan — all needed columns are in the index (covering index). Best.
  • Bitmap Heap Scan — combines multiple index results then fetches heap pages. Efficient when a query matches a moderate share of the table: too many rows for a plain index scan, too few for a sequential scan.
  • Hash Join — joins using a hash table. Common for large tables without usable join indexes.
  • Nested Loop — like SQL Server’s Nested Loops. Good when outer table is small.

SQL Server has sp_WhoIsActive for monitoring active queries. PostgreSQL has pg_stat_activity:

-- Show active queries longer than 5 seconds
SELECT pid, now() - query_start AS duration, query, state
FROM pg_stat_activity
WHERE state != 'idle'
  AND query_start < now() - interval '5 seconds'
ORDER BY duration DESC;

-- Kill a query
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE pid = 12345;

Key Differences

| Concept              | SQL Server                                                  | PostgreSQL                                          |
|----------------------|-------------------------------------------------------------|-----------------------------------------------------|
| Authentication       | Windows Auth or SQL Auth                                    | Password, certificate, LDAP (no Windows Auth)       |
| Default client       | SSMS                                                        | psql, pgAdmin, TablePlus                            |
| Schemas              | Namespaces within a database                                | Same — but cross-database queries are not native    |
| String type          | NVARCHAR, VARCHAR (separate)                                | TEXT or VARCHAR (always UTF-8)                      |
| Boolean              | BIT (0/1)                                                   | BOOLEAN (TRUE/FALSE)                                |
| GUID                 | UNIQUEIDENTIFIER                                            | UUID                                                |
| Timestamps           | DATETIME2                                                   | TIMESTAMPTZ (always use with timezone)              |
| Auto-increment       | IDENTITY(1,1)                                               | SERIAL or GENERATED ALWAYS AS IDENTITY              |
| JSON                 | String-based functions                                      | JSONB binary type with GIN indexes                  |
| Case sensitivity     | Case-insensitive by default                                 | Case-sensitive by default                           |
| String quoting       | Single quotes; [brackets] or double quotes for identifiers  | Single quotes only; double quotes for identifiers   |
| Execution plan       | SSMS GUI                                                    | EXPLAIN ANALYZE                                     |
| Activity monitor     | sp_WhoIsActive                                              | pg_stat_activity                                    |
| Top N rows           | SELECT TOP 10                                               | SELECT ... LIMIT 10                                 |
| Pagination           | OFFSET ... FETCH NEXT ... ROWS ONLY                         | LIMIT ... OFFSET ...                                |
| String concatenation | +                                                           | \|\|                                                |
| Null-safe equality   | IS NULL / IS NOT NULL                                       | Same, plus IS DISTINCT FROM / IS NOT DISTINCT FROM  |
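The last row deserves a quick illustration, since IS DISTINCT FROM may be new if you have not used it in T-SQL:

-- NULL-safe comparison
SELECT NULL = NULL;                      -- NULL (unknown), same as in SQL Server
SELECT NULL IS NOT DISTINCT FROM NULL;   -- TRUE: treats two NULLs as equal
SELECT 1 IS DISTINCT FROM NULL;          -- TRUE: a value is distinct from NULL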

Gotchas for .NET Engineers

1. Case sensitivity in identifiers and string comparisons

SQL Server is case-insensitive by default for both identifiers and data. PostgreSQL is case-sensitive for data and for unquoted identifiers — but here is the trap: unquoted identifiers are silently lowercased.

-- This creates a column named "customerid" (unquoted identifiers are lowercased)
CREATE TABLE orders (CustomerId INTEGER);

-- Quoting preserves case exactly: this table has a column named "CustomerId"
CREATE TABLE quoted_orders ("CustomerId" INTEGER);

-- Against the quoted table you must always quote the column
SELECT "CustomerId" FROM quoted_orders; -- works
SELECT CustomerId FROM quoted_orders;   -- ERROR: lowercased to customerid, which does not exist
SELECT customerid FROM orders;          -- works: unquoted names match the lowercased column

The convention in PostgreSQL is to use snake_case for identifiers and never quote them. If you create tables with quoted mixed-case names (perhaps via a tool that mirrors your C# class names), you will regret it — every query will need quotes.

For string data comparisons, case sensitivity depends on the collation, but the default collation is case-sensitive:

-- PostgreSQL — case sensitive by default
SELECT * FROM users WHERE email = 'Admin@Example.com'; -- does NOT match 'admin@example.com'

-- Use ILIKE for case-insensitive pattern matching (SQL Server's LIKE is CI by default)
SELECT * FROM users WHERE email ILIKE 'admin@example.com';

-- Or LOWER()
SELECT * FROM users WHERE LOWER(email) = LOWER('Admin@Example.com');

2. TIMESTAMPTZ vs TIMESTAMP — always use TIMESTAMPTZ

SQL Server’s DATETIME2 stores a date and time with no timezone information, but by convention you store UTC. PostgreSQL has the same pattern available (TIMESTAMP WITHOUT TIME ZONE), but PostgreSQL’s TIMESTAMPTZ (TIMESTAMP WITH TIME ZONE) is fundamentally different: it converts the stored value to UTC on insert (based on the session’s TimeZone setting) and converts back on retrieval.

-- SQL Server convention: store UTC, no enforcement
INSERT INTO events (created_at) VALUES (GETUTCDATE()); -- you must remember to use GETUTCDATE()

-- PostgreSQL TIMESTAMPTZ: the database handles UTC conversion
SET TIME ZONE 'America/New_York';
INSERT INTO events (created_at) VALUES (NOW()); -- NOW() is timestamptz, stored as UTC

-- Retrieve: PostgreSQL converts from UTC to the session timezone automatically
SELECT created_at FROM events; -- shows Eastern time if session is Eastern

The gotcha is that TIMESTAMP (without timezone) is a lie: PostgreSQL stores the literal date/time bytes with no timezone semantics. If different clients write to a TIMESTAMP column using different timezones, your data is inconsistent with no way to recover. Always use TIMESTAMPTZ.
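A quick way to see the difference for yourself (a sketch you can paste into psql):

-- Compare how the two types round-trip across session timezones
CREATE TEMP TABLE tz_demo (
    with_tz    TIMESTAMPTZ,
    without_tz TIMESTAMP
);

SET TIME ZONE 'America/New_York';
INSERT INTO tz_demo VALUES (NOW(), NOW());

SET TIME ZONE 'UTC';
SELECT * FROM tz_demo;
-- with_tz is shifted to UTC automatically;
-- without_tz still shows the New York wall-clock value that happened to be stored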

3. BOOLEAN is not BIT — do not use 1/0

If you generate SQL dynamically or write raw SQL in your application, you may reach for 1 and 0 for boolean values. PostgreSQL will reject them for BOOLEAN columns.

-- SQL Server
INSERT INTO users (is_active) VALUES (1);  -- works, BIT column
UPDATE users SET is_active = 0 WHERE id = 1; -- works

-- PostgreSQL — these fail
INSERT INTO users (is_active) VALUES (1);   -- ERROR: column is of type boolean
UPDATE users SET is_active = 0 WHERE id = 1; -- ERROR

-- PostgreSQL — correct
INSERT INTO users (is_active) VALUES (TRUE);
INSERT INTO users (is_active) VALUES ('true');  -- PostgreSQL accepts this
UPDATE users SET is_active = FALSE WHERE id = 1;

4. String concatenation uses ||, not +

-- SQL Server
SELECT first_name + ' ' + last_name AS full_name FROM users;
-- NULL + anything = NULL in SQL Server

-- PostgreSQL
SELECT first_name || ' ' || last_name AS full_name FROM users;
-- NULL || anything = NULL — same behavior

-- PostgreSQL CONCAT() ignores NULLs
SELECT CONCAT(first_name, ' ', last_name) AS full_name FROM users;

5. DDL is transactional (but watch ADD COLUMN defaults)

In SQL Server, DDL statements (like ALTER TABLE) can be wrapped in a transaction and rolled back. This is extremely useful for safe migrations. In PostgreSQL, DDL is also transactional — but many developers do not realize this because it is less commonly emphasized. The gotcha is ALTER TABLE ... ADD COLUMN with a non-null default in older PostgreSQL versions: it used to rewrite the entire table. Since PostgreSQL 11, adding a column with a constant default is instant. But adding a column with a volatile default (like NOW()) still requires a table rewrite.
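A sketch of what this means for a migration (the column names here are hypothetical):

-- Transactional DDL: either everything below applies, or none of it does
BEGIN;

ALTER TABLE orders ADD COLUMN shipped_at TIMESTAMPTZ;              -- instant: nullable, no default
ALTER TABLE orders ADD COLUMN status TEXT NOT NULL DEFAULT 'new';  -- instant in PG 11+: constant default
-- ALTER TABLE orders ADD COLUMN imported_at TIMESTAMPTZ NOT NULL DEFAULT NOW();
--   a volatile default like NOW() still rewrites the whole table

COMMIT;  -- or ROLLBACK; to undo all of it, DDL included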

6. SELECT TOP becomes LIMIT

-- SQL Server
SELECT TOP 10 * FROM orders ORDER BY created_at DESC;

-- PostgreSQL
SELECT * FROM orders ORDER BY created_at DESC LIMIT 10;

-- Pagination
-- SQL Server (2012+)
SELECT * FROM orders ORDER BY created_at DESC
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;

-- PostgreSQL
SELECT * FROM orders ORDER BY created_at DESC
LIMIT 10 OFFSET 20;

Hands-On Exercise

This exercise installs PostgreSQL locally and migrates a small SQL Server schema.

Setup:

# macOS
brew install postgresql@16
brew services start postgresql@16

# Create a database and user
psql postgres -c "CREATE USER myapp WITH PASSWORD 'secret';"
psql postgres -c "CREATE DATABASE myapp OWNER myapp;"

# Connect
psql postgresql://myapp:secret@localhost/myapp

Exercise — translate this SQL Server schema to PostgreSQL:

-- SQL Server (given)
CREATE TABLE customers (
    customer_id    INT             IDENTITY(1,1) PRIMARY KEY,
    external_ref   UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
    full_name      NVARCHAR(200)   NOT NULL,
    email          NVARCHAR(320)   NOT NULL,
    is_active      BIT             NOT NULL DEFAULT 1,
    created_at     DATETIME2(7)    NOT NULL DEFAULT SYSUTCDATETIME(),
    metadata       NVARCHAR(MAX),  -- stores JSON
    CONSTRAINT uq_customers_email UNIQUE (email)
);

CREATE INDEX IX_customers_email ON customers (email);

Solution — PostgreSQL equivalent:

CREATE TABLE customers (
    customer_id    INTEGER         GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    external_ref   UUID            NOT NULL DEFAULT gen_random_uuid(),
    full_name      TEXT            NOT NULL,
    email          TEXT            NOT NULL,
    is_active      BOOLEAN         NOT NULL DEFAULT TRUE,
    created_at     TIMESTAMPTZ     NOT NULL DEFAULT NOW(),
    metadata       JSONB,
    CONSTRAINT uq_customers_email UNIQUE (email)
);

-- B-tree index on email (the UNIQUE constraint creates one, but explicit for clarity)
-- This index is redundant given the unique constraint above; shown for illustration
-- CREATE INDEX idx_customers_email ON customers (email);

-- GIN index on JSONB for arbitrary key lookups
CREATE INDEX idx_customers_metadata ON customers USING GIN (metadata);

Now write a query that:

  1. Returns all active customers created in the last 30 days
  2. Extracts a hypothetical metadata->>'plan' field
  3. Orders by created_at descending with a limit of 25

SELECT
    customer_id,
    full_name,
    email,
    metadata->>'plan' AS plan,
    created_at
FROM customers
WHERE is_active = TRUE
  AND created_at >= NOW() - INTERVAL '30 days'
ORDER BY created_at DESC
LIMIT 25;

Run EXPLAIN ANALYZE on that query. Note whether the planner uses the GIN index or the unique index on email. Try adding WHERE metadata @> '{"plan": "pro"}' and observe whether the GIN index is used.
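For example, the suggested experiment might look like this (plan output will vary; on a nearly empty table the planner may prefer a Seq Scan regardless):

EXPLAIN ANALYZE
SELECT customer_id, full_name
FROM customers
WHERE is_active = TRUE
  AND created_at >= NOW() - INTERVAL '30 days'
  AND metadata @> '{"plan": "pro"}'
ORDER BY created_at DESC
LIMIT 25;
-- Look for "Bitmap Index Scan on idx_customers_metadata" to confirm the GIN index is used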


Quick Reference

-- Connection
psql postgresql://user:pass@host:5432/dbname

-- List objects
\l          -- databases
\c dbname   -- connect to database
\dt         -- tables
\d table    -- describe table
\dn         -- schemas
\df         -- functions
\x          -- toggle expanded output

-- Data types
INTEGER / BIGINT / SMALLINT
TEXT / VARCHAR(n)
BOOLEAN                          -- TRUE / FALSE
NUMERIC(p,s)
UUID                             -- gen_random_uuid()
TIMESTAMPTZ                      -- NOW(), CURRENT_TIMESTAMP
JSONB                            -- attributes->>'key', attributes @> '{}'

-- Auto-increment
id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY
id SERIAL PRIMARY KEY            -- older style, still valid

-- String ops
'a' || 'b'                       -- concatenation (not +)
LOWER(col) / UPPER(col)
col ILIKE '%pattern%'            -- case-insensitive LIKE

-- JSON
col->>'key'                      -- extract as text
col->'key'                       -- extract as JSONB
col @> '{"key": "val"}'::jsonb   -- containment check (uses GIN index)
col || '{"newkey": 1}'::jsonb    -- merge/update
col - 'key'                      -- remove key

-- Limits and pagination
SELECT ... LIMIT 10 OFFSET 20;

-- Performance
EXPLAIN ANALYZE SELECT ...;
SELECT * FROM pg_stat_activity WHERE state != 'idle';
SELECT pg_terminate_backend(pid);

-- Timestamps
NOW()                            -- current timestamptz
CURRENT_TIMESTAMP                -- same
NOW() - INTERVAL '30 days'      -- 30 days ago
EXTRACT(YEAR FROM created_at)   -- year component
DATE_TRUNC('month', created_at) -- truncate to month

-- Full-text
to_tsvector('english', text_col)
to_tsquery('english', 'word & other')
tsvector @@ tsquery              -- matches?
ts_rank(tsvector, tsquery)       -- relevance score

Further Reading