New: Vector version 0.45.0 is available.

Ultra fast and reliable. End to end. Unified. Vendor neutral. Programmable transforms. Clear guarantees.

Datadog Agent logs to Datadog (/etc/vector/vector.yaml):

sources:
  datadog_agent:
    type: "datadog_agent"
    address: "0.0.0.0:80"

transforms:
  remove_sensitive_user_info:
    type: "remap"
    inputs: ["datadog_agent"]
    source: |
      redact(., filters: ["us_social_security_number"])

sinks:
  datadog_backend:
    type: "datadog_logs"
    inputs: ["remove_sensitive_user_info"]
    default_api_key: "${DATADOG_API_KEY}"

Kafka topic to Elasticsearch (/etc/vector/vector.yaml):

sources:
  kafka_in:
    type: "kafka"
    bootstrap_servers: "10.14.22.123:9092,10.14.23.332:9092"
    group_id: "vector-logs"
    key_field: "message"
    topics: ["logs-*"]

transforms:
  json_parse:
    type: "remap"
    inputs: ["kafka_in"]
    source: |
      parsed, err = parse_json(.message)
      if err != null {
        log(err, level: "error")
      }
      . |= object(parsed) ?? {}

sinks:
  elasticsearch_out:
    type: "elasticsearch"
    inputs: ["json_parse"]
    endpoint: "http://10.24.32.122:9000"
    index: "logs-via-kafka"

Kubernetes logs to AWS S3 (/etc/vector/vector.yaml):

sources:
  k8s_in:
    type: "kubernetes_logs"

sinks:
  aws_s3_out:
    type: "aws_s3"
    inputs: ["k8s_in"]
    bucket: "k8s-logs"
    region: "us-east-1"
    compression: "gzip"
    encoding:
      codec: "json"

Splunk HEC to Datadog (/etc/vector/vector.yaml):

sources:
  splunk_hec_in:
    type: "splunk_hec"
    address: "0.0.0.0:8080"
    token: "${SPLUNK_HEC_TOKEN}"

sinks:
  datadog_out:
    type: "datadog_logs"
    inputs: ["splunk_hec_in"]
    default_api_key: "${DATADOG_API_KEY}"

Configuration examples are in YAML, but Vector also supports TOML and JSON.

A lightweight, ultra-fast tool for building observability pipelines. Collect, transform, and route all your logs and metrics with one simple tool. Deploy Vector in a variety of roles to suit your use case. Get data from point A to point B without patching tools together. Vector supports distributed, centralized, and stream-based deployment topologies.

A simple, composable format enables you to build flexible pipelines.

Packaged as a single binary for x86_64 and ARM64/v7. No dependencies, no runtime, and memory safe. Install with a one-liner:

  curl --proto '=https' --tlsv1.2 -sSfL https://sh.vector.dev | bash

Or run the installer non-interactively:

  curl --proto '=https' --tlsv1.2 -sSfL https://sh.vector.dev | bash -s -- -y

Or choose your preferred method: platforms, package managers, operating systems, or a manual install.

A wide range of sources, transforms, and sinks to choose from: 43 sources, 14 transforms, and 58 sinks. 13k+ GitHub stars, 300+ contributors, 30m+ downloads, 40 countries.

Sign up to receive emails on the latest Vector content and new releases.

© 2025 Datadog, Inc. All rights reserved.
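As noted above, the same configurations can also be written in TOML or JSON. A sketch of the Splunk HEC to Datadog pipeline in TOML, under the assumption of a direct key-for-key translation of the YAML example:

```toml
[sources.splunk_hec_in]
type = "splunk_hec"
address = "0.0.0.0:8080"
token = "${SPLUNK_HEC_TOKEN}"

[sinks.datadog_out]
type = "datadog_logs"
inputs = ["splunk_hec_in"]
default_api_key = "${DATADOG_API_KEY}"
```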
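The Kafka example's remap program parses `.message` as JSON and merges the result into the event, leaving the event untouched when parsing fails (the `?? {}` fallback). A rough Python sketch of that pattern — an illustration only, not Vector's VRL runtime:

```python
import json

def json_parse(event: dict) -> dict:
    """Mimic the remap step: parse .message as JSON and merge it in.

    On a parse error, log it and leave the event unchanged,
    like `. |= object(parsed) ?? {}` in VRL.
    """
    try:
        parsed = json.loads(event.get("message", ""))
    except (json.JSONDecodeError, TypeError) as err:
        print(f"error: {err}")  # stands in for log(err, level: "error")
        parsed = None
    if isinstance(parsed, dict):  # only merge JSON objects
        event.update(parsed)
    return event

# A well-formed message is merged into the event alongside .message:
json_parse({"message": '{"level": "info", "app": "web"}'})
```

Non-object JSON (a bare number or array) is also skipped by the `isinstance` check, mirroring VRL's `object(parsed)` coercion failing for non-objects.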
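The first example's `redact` call masks US Social Security numbers before logs leave the pipeline. A hypothetical Python sketch of the same idea, using a simple `ddd-dd-dddd` regex rather than Vector's actual matcher:

```python
import re

# Matches ddd-dd-dddd style SSNs; Vector's real filter is more thorough.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_ssn(text: str) -> str:
    """Replace any SSN-shaped substring with a fixed marker."""
    return SSN_RE.sub("[REDACTED]", text)

print(redact_ssn("user ssn=123-45-6789 logged in"))
# user ssn=[REDACTED] logged in
```

Doing this inside the pipeline means downstream sinks (Datadog, S3, Elasticsearch) never see the sensitive value at all.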