“Data integration is fast, robust, and centralized.”
Use Cases and Deployment Scope
We use Data Virtuality, which greatly simplifies mapping data structures to
relational tables. With a few lines of SQL we can connect to an SFTP server,
query new JSON files, convert them to XML, and map all of the acquired data to
the warehouse's relational model; about 20 lines of clean, well-structured SQL
are enough. We have pipes in place for each potential data source. Pipes was
the only option we found that met our requirements: it is reasonably priced and
includes an integrated SQL editor, which makes ETL a breeze. I also like the
data replication features and the stored-procedure language (more than T-SQL),
and customer support is excellent; if there's a bug, they fix it quickly and
send us a patch. Overall, I have a favorable opinion of the Data Virtuality
Platform.
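Data Virtuality's own pipe definitions are written in its SQL dialect, which I
won't reproduce from memory. The sketch below only illustrates the general
idea of that workflow, flattening incoming JSON records into a relational
table, using Python's standard library; the sample payload, the `orders`
schema, and the `load_orders` helper are all hypothetical names invented for
this example.

```python
import json
import sqlite3

# Hypothetical sample of a JSON file as it might arrive via SFTP; the field
# names (order_id, customer, amount) are assumptions for illustration only.
raw_json = """
[
  {"order_id": 1, "customer": "ACME",   "amount": 120.50},
  {"order_id": 2, "customer": "Globex", "amount": 80.00}
]
"""

def load_orders(conn: sqlite3.Connection, payload: str) -> int:
    """Map JSON records onto a relational table, as a pipe's SQL would."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders ("
        "order_id INTEGER PRIMARY KEY, customer TEXT, amount REAL)"
    )
    rows = [(r["order_id"], r["customer"], r["amount"])
            for r in json.loads(payload)]
    # Idempotent load: re-running on the same file replaces, not duplicates.
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()
    return len(rows)

conn = sqlite3.connect(":memory:")
print(load_orders(conn, raw_json))  # → 2
print(conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0])  # → 200.5
```

In the real platform this whole step is expressed declaratively in SQL inside
the pipe, which is precisely why so little code is needed.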
Pros
- ETL/ELT model building is flexible.
- Pipes are intuitive to set up and use.
- It lets us access various data sources, extract, and analyze data.
Cons
- Configuring data connectors can be tricky.
- Price could be more flexible and adapt to the use case.
- The ETL builder's user interface could be more polished.
Likelihood to Recommend
Data Virtuality Platform's best and most distinctive feature is that it is
SQL-based, which gives us a flexibility in working with our data that other
marketing integration pipeline tools couldn't provide. Our main benefits are
the short time it takes to connect to our data sources and the flexibility of
the virtual SQL layer in meeting our end users' data needs. It lets us tap
into a variety of data repositories, extract the data they contain, and
analyze it, so we can generate actionable reports and make data-driven
decisions. However, configuring data connectors can be challenging, owing to
inadequate data governance rules and a complex configuration process.
