The PostgreSQL Pointcloud Writer allows you to write to a PostgreSQL database that has the PostgreSQL Pointcloud extension enabled. The Pointcloud extension stores point cloud data in tables that contain rows of patches. Each patch in turn contains a large number of spatially nearby points.

While you can theoretically store the contents of a whole file of points in a single patch, it is more practical to store a table full of smaller patches, where each patch fits under the PostgreSQL page size (8 KB). For most LIDAR data, this works out to a patch size of between 400 and 600 points.

In order to create patches of the right size, the Pointcloud writer should be preceded in the pipeline file by filters.chipper.

The pgpointcloud format does not support WKT spatial reference specifications. A subset of spatial references can be stored by using the ‘srid’ option, which allows storage of an EPSG code that covers many common spatial references. PDAL makes no attempt to reproject data to your specified srid. Use filters.reprojection for this purpose.
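
For instance, a pipeline could reproject to the coordinate system named by the ‘srid’ option before the writer stage. This sketch assumes EPSG:26916 as the target; the connection string and table name are illustrative:

```json
[
    "input.las",
    {
        "type":"filters.reprojection",
        "out_srs":"EPSG:26916"
    },
    {
        "type":"writers.pgpointcloud",
        "connection":"host='localhost' dbname='lidar' user='pramsey'",
        "table":"lidar",
        "srid":"26916"
    }
]
```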

Dynamic Plugin

This stage requires a dynamic plugin to operate.


        "connection":"host='localhost' dbname='lidar' user='pramsey'",



connection

PostgreSQL connection string. In the form “host=hostname dbname=database user=username password=pw port=5432” [Required]


table

Database table to write to. [Required]


schema

Database schema to write to. [Default: “public”]


column

Table column to put patches into. [Default: “pa”]


compression

Patch compression type to use. [Default: “dimensional”]

  • none applies no compression

  • dimensional applies dynamic compression to each dimension separately

  • lazperf applies a “laz” compression (using the laz-perf library in PostgreSQL Pointcloud)
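
As a sketch, a writer stage selecting one of these compression schemes might look like the following (connection string and table name are placeholders):

```json
{
    "type":"writers.pgpointcloud",
    "connection":"host='localhost' dbname='lidar' user='pramsey'",
    "table":"lidar",
    "compression":"lazperf"
}
```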


overwrite

To drop the table before writing, set to ‘true’. To append to the table, set to ‘false’. [Default: false]


srid

Spatial reference ID (relative to the spatial_ref_sys table in PostGIS) to store with the point cloud schema. [Default: 4326]


pcid

An optional existing PCID to use for the point cloud schema. If specified, the schema must be present. If not specified, a matching schema will be looked for; if none is found, a new schema is inserted. [Default: 0]


pre_sql

SQL to execute before running the translation. If the value references a file, the file is read and any SQL inside is executed. Otherwise the value is executed as SQL itself. [Optional]


post_sql

SQL to execute after running the translation. If the value references a file, the file is read and any SQL inside is executed. Otherwise the value is executed as SQL itself. [Optional]
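
For example, a stage might use post_sql to build a spatial index on the patch column after loading. This is a sketch: the table name is illustrative, and PC_EnvelopeGeometry is a pgPointcloud function that returns a patch’s bounding geometry:

```json
{
    "type":"writers.pgpointcloud",
    "connection":"host='localhost' dbname='lidar' user='pramsey'",
    "table":"lidar",
    "post_sql":"CREATE INDEX ON lidar USING GIST (PC_EnvelopeGeometry(pa))"
}
```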

scale_x, scale_y, scale_z / offset_x, offset_y, offset_z

If ANY of these options are specified, the X, Y and Z dimensions are adjusted by subtracting the offset and then dividing the values by the specified scaling factor before being written as 32-bit integers (as opposed to double precision values). If any of these options is specified, unspecified scale_<x,y,z> options are given the value of 1.0 and unspecified offset_<x,y,z> options are given the value of 0.0.
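
For example, to store coordinates as scaled 32-bit integers with centimeter precision, a stage might look like this (the scale values shown are illustrative; unspecified offsets would default to 0.0):

```json
{
    "type":"writers.pgpointcloud",
    "connection":"host='localhost' dbname='lidar' user='pramsey'",
    "table":"lidar",
    "scale_x":0.01,
    "scale_y":0.01,
    "scale_z":0.01
}
```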


output_dims

If specified, limits the dimensions written for each point. Dimensions are listed by name and separated by commas.


where

An expression that limits points passed to a writer. Points that don’t pass the expression skip the stage but are available to subsequent stages in a pipeline. [Default: no filtering]
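
As an illustration, a writer could store only ground-classified points by filtering on the Classification dimension (the table name here is hypothetical):

```json
{
    "type":"writers.pgpointcloud",
    "connection":"host='localhost' dbname='lidar' user='pramsey'",
    "table":"ground",
    "where":"Classification == 2"
}
```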


where_merge

A strategy for merging points skipped by a ‘where’ option when running in standard mode. If true, the skipped points are added to the first point view returned by the skipped filter. If false, skipped points are placed in their own point view. If auto, skipped points are merged into the returned point view provided that only one point view is returned and it has the same point count as it did when the writer was run. [Default: auto]