“When microseconds matter, Levyx’s Xenon accelerated computing platform running on Vexata’s accelerated enterprise storage offers a compelling, high-performance solution for backtesting and risk analytics.”
Zahid Hussain
CEO of Vexata
|
“By combining Levyx’s high-performance Helium™ key-value store with Samsung’s ultra-low latency Z-SSD™, we have demonstrated performance improvements of up to 10X over the conventional approaches of processing large-scale datasets.”
Bob Brennan
Senior Vice President, Samsung
|
“Helium performance is orders of magnitude (~5x) improved over RocksDB during ingestion. This can be attributed to the lockless architecture, which scales with the number of threads. Helium has better numbers for performance, run-time, latency, and host write-amplification when compared to RocksDB.”
Younes Ben Brahim
Product Manager, NetApp
|
“Levyx was able to obtain 850 percent faster performance on financial backtesting using FPGAs.”
Dan McNamara
Corporate vice president and general manager of the Programmable Solutions Group (PSG) at Intel Corporation
|
“Our product integrating Helium is gaining significant traction and we are pleased by the customer reactions to the performance levels Levyx helps us achieve. The partnership is a real win-win.”
Edouard Alligand
CEO of QuasarDB
|
“Serisys appreciates complementary partners like Levyx that help us elevate our game. Levyx's technology has the potential to help us disrupt the conventional means of processing large-scale data in the financial markets; we are glad they are a partner.”
Tim Marsh
CEO of Serisys
|
“[We] can achieve multi-million ingests on commodity servers while persisting the data. Due to the design of our connector and Xenon, the bottleneck here is not the software.”
Confluent Blog
Post dated May 31, 2018
|
Featured Media Content
Featured Videos
Learn how Levyx optimizes data center infrastructure and reduces cost.
- Real-time persistent computing for Big Data
- IO Bottleneck Breakthrough with Levyx and Vexata
Featured Audio
Listen in on the Levyx team's talks on Intel's podcast "Conversations in the Cloud" to hear about how Levyx's data store technology is getting applied in real-world use cases.
For over a decade, innovations in storage hardware and data management software progressed in silos.
Levyx bridges these silos.
Built as hardware-agnostic system software originally designed for optimal use of solid-state storage and multi-core CPUs, Levyx's software-defined data processor solutions allow customers to fully realize the business benefits of advances in both storage hardware AND Big Data compute software.
Welcome to the Era of Persistent Dataframes
For the first time, all real-time working datasets are:
- Persisted on Flash
- Accessible by multiple users
- Replicated for high availability
- Subject to real-time updates and transactions
Enabling Analytics on Flash
Applications / Solutions
Where we sit in the I/O path
Value Proposition
Levyx's software enables input/output (I/O) intensive legacy and Big Data applications to operate in a way that is faster, simpler and cheaper.
- Faster (by over 10x) than other solutions because of its multi-core, flash-optimized, query pushdown, and patented indexing design.
- Simpler than other architectures, which make trade-offs between performance and storage tiering complexity - all data is persisted by Levyx at memory speeds.
- Cheaper because Levyx replaces random access memory (RAM) with less costly Flash storage (typically 10x cheaper per GB), yet achieves equivalent or greater performance using drastically fewer distributed commodity server nodes.
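The cost claim above can be illustrated with back-of-the-envelope arithmetic. The per-GB prices below are illustrative assumptions for the sake of the example, not vendor quotes:

```python
# Back-of-the-envelope comparison of DRAM vs. Flash capacity cost.
# Prices are assumed figures for illustration, not vendor quotes.
DRAM_COST_PER_GB = 5.00   # assumed USD/GB for server DRAM
FLASH_COST_PER_GB = 0.50  # assumed USD/GB for NVMe Flash (~10x cheaper)

def media_cost(dataset_gb: float, cost_per_gb: float) -> float:
    """Raw media cost of holding a dataset of the given size."""
    return dataset_gb * cost_per_gb

dataset_gb = 10_000  # a 10 TB working set
dram = media_cost(dataset_gb, DRAM_COST_PER_GB)
flash = media_cost(dataset_gb, FLASH_COST_PER_GB)
print(f"DRAM:  ${dram:,.0f}")                 # DRAM:  $50,000
print(f"Flash: ${flash:,.0f}")                # Flash: $5,000
print(f"Savings ratio: {dram / flash:.0f}x")  # Savings ratio: 10x
```

At equal capacity the media cost tracks the per-GB price directly, which is why holding a working set on Flash instead of DRAM scales the bill down by roughly the price ratio.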
Technology Highlights
- One of the World's First Flash-resident Compute Engines
- High-performance, patent-pending analytics platform sits directly on Flash SSDs
- No longer bound to DRAM for “In-Memory” performance
- One of the World’s Fastest Key-Value Stores
- Benchmarked against many of the world’s fastest KVS technologies
- Typically orders of magnitude faster than the rest
- Distributed Storage Class Memory™ at Scale
- Patent-pending SW abstraction (virtualization) of storage-class memory (SCM)
- Distributes dataset on large-scale clusters (i.e., not just a single node or device)
- New Indexing technologies
- Built from the ground up and patent-pending
- Go well beyond LSM-tree and B-tree schemes (textbook methods not designed for modern hardware)
- HW-SW Parallelism
- Exploits full benefit of multi-core processors in distributed systems
- Optimizes bandwidth of shared resources (like storage) pushing limits to physical boundaries
- Just-in-Time-Compilation (JITC)
- Built into the distributed architecture
- Accomplishes query and aggregation offload/acceleration
- Flash Optimized
- Leverages Flash properties, making SSDs viable in real-time Big Data applications
- Architected to utilize all the SSD bandwidth
Ease of Use
Deploy Anywhere
- On-premises
- In the cloud
- On containerized storage
In Any Environment
- OS-agnostic
- Runs on commodity or custom hardware
- Optimized for any form of NVM SSDs - Flash, Storage Class Memories