Overview
Beta
Ingest real-time data streams and load them into R2, using Cloudflare Pipelines.
Cloudflare Pipelines lets you ingest high volumes of real-time data without managing any infrastructure. A single pipeline can ingest up to 100 MB of data per second. Ingested data is automatically batched, written to output files, and delivered to an R2 bucket in your account. You can use Pipelines to build a data lake of clickstream data, or to archive logs from a service.
You can set up a pipeline to ingest data via HTTP and deliver output to R2 with a single command:
npx wrangler pipelines create my-pipeline --r2-bucket my-r2-bucket
Authorizing R2 bucket "my-r2-bucket"
Creating pipeline named "my-pipeline"
Successfully created pipeline my-pipeline with ID 0adedf710c20401x74380236286a8b9c

You can now send data to your pipeline with: curl "https://0adedf710c20401x74380236286a8b9c.pipelines.cloudflare.com/" -d '[{ "foo": "bar" }]'
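The endpoint accepts a JSON array in the request body, so you can send several events in one request. A minimal sketch, reusing the endpoint URL from the output above (the event fields shown are placeholders):

# Send two example events in a single request
curl "https://0adedf710c20401x74380236286a8b9c.pipelines.cloudflare.com/" \
  -H "Content-Type: application/json" \
  -d '[{"event": "pageview", "url": "/pricing"}, {"event": "click", "url": "/signup"}]'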
Refer to the get started guide to start building with Pipelines.
HTTP as a source
Each pipeline generates a globally scalable HTTP endpoint, which supports authentication and CORS settings.
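For example, if you require authentication on a pipeline, requests must present a credential. A rough sketch, assuming authentication is enabled and a Cloudflare API token is sent as a bearer token (check the HTTP source documentation for the exact scheme):

# <PIPELINE-ID> and <API_TOKEN> are placeholders
curl "https://<PIPELINE-ID>.pipelines.cloudflare.com/" \
  -H "Authorization: Bearer <API_TOKEN>" \
  -d '[{"event": "login", "user_id": "1234"}]'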
Customize output settings
Define batch sizes and enable compression to generate output files that are efficient to query.
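As a sketch of adjusting these settings on an existing pipeline; the flag names below (--batch-max-seconds, --compression) are assumptions to verify against the wrangler pipelines reference:

# Roll output files every 60 seconds and compress them with gzip (flag names assumed)
npx wrangler pipelines update my-pipeline --batch-max-seconds 60 --compression gzip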
Cloudflare R2 Object Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
Cloudflare Workers allows developers to build serverless applications and deploy instantly across the globe for exceptional performance, reliability, and scale.