We’re excited to meet you at Snowflake Summit this year! Come visit our booth to win some exciting giveaways, meet our team, and learn how transforming data both before and after loading shrinks usage costs!

Meet Us @ Snowflake Summit 2022

Data transformers

Built for performance, designed for humans

Try it now


Datorios offers a variety of correlation transformers that associate events with data sources and values


Enrich your data by correlating event values to indexed data, for example, enriching user events with data from a database or from social media sources.
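As a minimal sketch of this enrichment pattern (plain Python, not the Datorios API; the field names `user_id`, `name`, and `country` are hypothetical):

```python
# Illustrative sketch: enrich an event by looking up one of its
# values in an indexed reference table and merging the match in.
user_index = {
    101: {"name": "Ada", "country": "UK"},
    102: {"name": "Lin", "country": "SG"},
}

def enrich(event, index):
    """Merge indexed reference data into the event when the key matches."""
    ref = index.get(event.get("user_id"))
    return {**event, **ref} if ref else event

enriched = enrich({"user_id": 101, "action": "login"}, user_index)
```

Events whose key has no entry in the index pass through unchanged.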

Multi-source joining

Correlate data from un-synced data sources using predefined conditions. Correlations between the different sources can be executed as full, partial, or mandatory.
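A simplified sketch of joining two un-synced sources on a shared key (plain Python, not the Datorios API; the `right_mandatory` flag and the order/shipment fields are hypothetical stand-ins for the full/partial/mandatory modes):

```python
# Illustrative sketch: correlate events from two sources on a key.
def join_sources(left, right, key, right_mandatory=True):
    """If right_mandatory is False, left events without a match are
    still emitted as a partial correlation; otherwise they are dropped."""
    right_by_key = {e[key]: e for e in right}
    out = []
    for ev in left:
        match = right_by_key.get(ev[key])
        if match is not None:
            out.append({**ev, **match})   # full correlation
        elif not right_mandatory:
            out.append(dict(ev))          # partial correlation
    return out

orders = [{"order_id": 1, "sku": "A"}, {"order_id": 2, "sku": "B"}]
shipments = [{"order_id": 1, "carrier": "DHL"}]
full = join_sources(orders, shipments, "order_id")             # matched only
partial = join_sources(orders, shipments, "order_id", False)   # keep unmatched
```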


Events pass through the correlator if they match (or don’t match) a logical condition comparing their values to a specific reference
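The match / don’t-match behaviour can be sketched as a simple predicate filter with a negation flag (plain Python, hypothetical field names, not the Datorios API):

```python
# Illustrative sketch: pass events whose values satisfy (or, with
# negate=True, fail) a logical condition against a reference.
def condition_filter(events, predicate, negate=False):
    for ev in events:
        if predicate(ev) != negate:
            yield ev

events = [{"status": "ok"}, {"status": "error"}]
passed = list(condition_filter(events, lambda e: e["status"] == "error"))
inverse = list(condition_filter(events, lambda e: e["status"] == "error", negate=True))
```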

Code capsules

Code capsules run a user-supplied script or piece of code that uses predefined values to create new events

  • Can be used to seamlessly embed calculations, algorithms & ML models into any pipeline
  • Like all Datorios transformers, Code capsules leverage autoscaling and monitoring to support any workload and provide better observability
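Conceptually, a code capsule is a function that takes predefined values from an event and emits a new, derived event. A minimal sketch (plain Python, not the Datorios API; the scoring rule is a hypothetical stand-in for any calculation, algorithm, or ML model):

```python
# Illustrative sketch: a "code capsule" as a user-supplied function
# that derives a new value from predefined event fields.
def score_capsule(event):
    """Hypothetical capsule: derive a risk score from event values."""
    score = event["amount"] * (2.0 if event["country"] != "US" else 1.0)
    return {**event, "risk_score": score}

result = score_capsule({"amount": 50.0, "country": "DE"})
```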


The distributor is a multi-filter that routes events matching predefined conditions
  • Feeds sub-pipelines/pipeline branches
  • Dynamically routes events to the “first match” sub-pipeline or to all sub-pipelines that meet a certain condition, for example, feeding different business-logic pipelines depending on event values
  • Logs and records pipeline errors
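The two routing modes can be sketched as follows (plain Python, not the Datorios API; the route conditions and pipeline names are hypothetical):

```python
# Illustrative sketch: "first match" sends an event to the first
# sub-pipeline whose condition it meets; otherwise it can be fanned
# out to every sub-pipeline whose condition matches.
def distribute(event, routes, first_match_only=True):
    """routes: list of (condition, pipeline_name) pairs.
    Returns the pipeline names the event is routed to."""
    targets = []
    for condition, name in routes:
        if condition(event):
            targets.append(name)
            if first_match_only:
                break
    return targets

routes = [
    (lambda e: e.get("error"), "error-log"),      # error-recording branch
    (lambda e: e["value"] > 10, "high-value"),
    (lambda e: True, "default"),                  # catch-all branch
]
first = distribute({"value": 42}, routes)
fan_out = distribute({"value": 42}, routes, first_match_only=False)
```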


The mapper enables easy transformation of fields (headers & content) to match any schema before loading to the target destination, or to standardize data for correlation

  • Regulate key names and values, build conditional values, and insert metadata into the event
  • Regulate data at the beginning of the pipeline and before loading it to the destination in the required schema
  • Any key name can be changed to a friendly name/schema name. Any value can be re-calculated using its own value and other event values (including metadata values)
  • Values can be determined according to predefined conditions, and new keys/values can be created using conditions/calculations of the event data and metadata
  • Any field can be omitted from the event, and metadata (or functions of the metadata) can be published to be part of the event
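The mapper operations above can be sketched in one function (plain Python, not the Datorios API; every field name and the metadata shape are hypothetical):

```python
# Illustrative sketch of mapper operations: rename a key to a schema
# name, recalculate a value, set a conditional value, publish
# metadata into the event, and omit an unwanted field.
def map_event(event, metadata):
    mapped = {
        "customer_id": event["cid"],                          # rename key
        "total_cents": round(event["total"] * 100),           # recalculate value
        "tier": "vip" if event["total"] > 100 else "standard",  # conditional value
        "ingested_at": metadata["received_ts"],               # publish metadata
    }
    # "internal_flag" from the source event is deliberately omitted
    return mapped

out = map_event(
    {"cid": "c-7", "total": 120.5, "internal_flag": True},
    {"received_ts": "2022-06-13T09:00:00Z"},
)
```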


Aggregators store events for correlation and for aggregated calculations over events

  • Event aggregation – Storing events waiting to be correlated
    For more efficient correlations, specific keys can be indexed
  • Aggregated events can serve multiple correlators – the events can be purged after a correlation has been achieved, after a defined period of time, or once a storage limit has been reached (FIFO)
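A minimal sketch of an aggregator that indexes stored events by a key for efficient correlation and purges the oldest events (FIFO) once a storage limit is reached (plain Python, not the Datorios API; the limit-based purge is one of the three purge triggers described above):

```python
from collections import OrderedDict, defaultdict

class Aggregator:
    """Illustrative sketch: store events awaiting correlation,
    indexed by a key, with a FIFO purge on a storage limit."""
    def __init__(self, key, limit):
        self.key = key
        self.limit = limit
        self.store = OrderedDict()        # insertion order = arrival order
        self.index = defaultdict(list)    # key value -> stored sequence ids
        self._seq = 0

    def add(self, event):
        self.store[self._seq] = event
        self.index[event[self.key]].append(self._seq)
        self._seq += 1
        if len(self.store) > self.limit:  # FIFO purge when limit is reached
            oldest, ev = self.store.popitem(last=False)
            self.index[ev[self.key]].remove(oldest)

    def lookup(self, value):
        """Serve a correlator: fetch stored events by indexed key."""
        return [self.store[i] for i in self.index[value]]

agg = Aggregator(key="user", limit=2)
for ev in [{"user": "a", "n": 1}, {"user": "b", "n": 2}, {"user": "a", "n": 3}]:
    agg.add(ev)
```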

Start building data pipelines today

Try it now