Wednesday, December 17, 2025

Plakar to simplify and boost data protection at any scale

Plakar joined The IT Press Tour last week in Athens, Greece, to introduce its approach to data protection at any scale, designed to work with any configuration.

Plakar positions itself as a foundational layer for modern data resilience, addressing what it describes as a growing “resilience deadlock” caused by ransomware, cloud complexity, AI-driven attacks, and fragmented backup ecosystems. Founded in France and backed by €3M in funding (Seedcamp and prominent tech founders), Plakar combines an open-source core with an enterprise-grade control plane to redefine how organizations protect, store, and restore data across environments.


The company argues that data loss incidents are accelerating due to a convergence of threats: ransomware, cloud misconfiguration, SaaS and AI sprawl, insider risks, supply-chain attacks, and infrastructure failures. At the same time, security budgets grow far more slowly than attack surfaces, making prevention alone insufficient. In this context, Plakar frames backup and rapid recovery as the “last line of defense” when all other controls fail.

Plakar’s central thesis is that today’s backup market is structurally broken. Proprietary formats, vendor lock-in, and incompatible tools create an illusion of safety, while real recovery often fails after attacks. Existing architectures force trade-offs between encryption and efficiency (deduplication), or between security and interoperability. Plakar proposes solving this through an Open Resilience Standard built on transparency, auditability, and zero-trust principles.


At the technical core is the Plakar agent, which packages data from filesystems, object storage, SaaS platforms, and databases into self-contained, portable units called Klosets. These Klosets are encrypted end-to-end, immutable, verifiable, deduplicated, compressed, and fully browsable and searchable. Plakar likens Klosets to what containers did for compute: a standardized, portable abstraction that decouples data protection from infrastructure. Data can be backed up anywhere, stored anywhere (cloud, NAS, tape), and restored everywhere—without proprietary dependencies.
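
To make the Kloset idea concrete, here is a minimal Python sketch of a self-contained, deduplicated, browsable backup unit. It illustrates the principle only; it is not Plakar's actual on-disk format (the real implementation uses content-defined chunking, encryption, and integrity verification), and all names below are invented.

```python
import hashlib
import json
import zlib
from pathlib import Path

def package_directory(source_dir: str, out_path: str) -> None:
    """Package a directory tree into one self-contained unit:
    content-addressed chunks (deduplicated), compressed, plus an
    index that makes the unit browsable without external metadata."""
    chunks = {}   # chunk hash -> compressed chunk (stored once)
    index = {}    # relative file path -> ordered list of chunk hashes

    for path in sorted(Path(source_dir).rglob("*")):
        if not path.is_file():
            continue
        hashes = []
        with path.open("rb") as f:
            while block := f.read(64 * 1024):   # fixed 64 KiB chunks
                digest = hashlib.sha256(block).hexdigest()
                chunks.setdefault(digest, zlib.compress(block))
                hashes.append(digest)
        index[str(path.relative_to(source_dir))] = hashes

    with open(out_path, "w") as out:
        json.dump({"index": index,
                   "chunks": {h: c.hex() for h, c in chunks.items()}}, out)
```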

A key differentiator is the Plakar Vault Storage Protocol, which enables “trustless delegation.” Data is encrypted client-side, and encryption keys never leave the customer’s control. Cloud providers or MSPs can manage storage tiers, replication, and SLAs on opaque data blobs, enabling resilience-as-a-service without exposing sensitive data or creating centralized attack targets. This approach aims to reconcile compliance, sovereignty, and cost optimization.
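
The principle behind trustless delegation is simple to demonstrate: encrypt before upload, and hand the provider only ciphertext. Below is a minimal sketch using the third-party Python `cryptography` package; it is purely illustrative and says nothing about the actual Vault Storage Protocol wire format.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # generated and kept by the customer

def seal(plaintext: bytes) -> bytes:
    """Encrypt client-side so the provider only ever sees an opaque blob."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def unseal(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# The provider can replicate, tier, and enforce SLAs on this blob
# without ever being able to read it.
blob = seal(b"payroll database dump")
assert unseal(blob) == b"payroll database dump"
```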


Plakar recently announced Plakar Enterprise, a unified backup posture management platform delivered as a virtual appliance. It adds role-based access control, multi-user management, secret management integration, orchestration, SLA monitoring, compliance reporting, and centralized visibility across on-prem, cloud, and SaaS assets. The open-source community edition remains free, with guarantees against vendor lock-in: restores are always possible without a license.


Overall, Plakar positions itself not merely as another backup product, but as an open ecosystem and emerging standard for data resilience - designed for an era where recovery certainty, not just protection, determines business survival.

A fresh approach to a domain that companies too often ignore or mis-address, and we expect a trajectory similar to the one we have seen from the famous usual suspects.


Monday, December 15, 2025

HyperBunker, a new dimension in cyber attack protection

HyperBunker, a newcomer in the data protection space, joined The IT Press Tour last week in Athens, Greece.

The company develops a hardware-based data protection solution designed to address one of the most critical failures in modern cybersecurity: the inability to reliably recover data after a ransomware attack. Built on insights from more than 50,000 real-world data recovery cases over 25 years, HyperBunker is positioned as a last-resort resilience layer for organizations whose connected defenses and cloud-based backups have already failed.


HyperBunker’s mission is to make recovery certain when everything else breaks down. Its vision is to establish a global standard for offline resilience, based on the principle that attackers can only compromise what they can reach. As ransomware attacks increase in scale, speed, and sophistication—accelerated further by AI-driven intrusion techniques—the presentation argues that traditional, credential-based and cloud-connected security models have become fundamentally unreliable. Industry data shows that most attacks remain undetected, that full domain compromise is often trivial, and that even well-funded security stacks frequently fail to stop modern ransomware variants.

The core problem HyperBunker addresses is not prevention, but guaranteed recovery. Organizations depend on a small set of trust-critical data—identity systems, financial records, operational configurations, regulatory archives, and customer or partner data. If these datasets are lost or corrupted, business continuity collapses regardless of how advanced other IT systems may be. HyperBunker is designed specifically to protect this “data that keeps organizations alive.”

Technically, HyperBunker is a fully offline, physical data vault. It has no credentials, no cloud APIs, and no external connectivity, making it unreachable by attackers. Data enters the system through a patented “butlering” unit that acts as a controlled airlock, enforcing double physical air-gapping between connected environments and the offline vault. Once inside, data is stored as immutable copies, with the most recent versions always preserved. Because the vault is never online, it is inherently resistant to ransomware, insider threats, credential theft, and even future quantum-based attacks.
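
The airlock logic can be expressed as a small state machine, sketched below in Python. This is a conceptual illustration of the double air-gap invariant (never bridged to both sides at once), not HyperBunker's patented mechanism.

```python
class ButleringAirlock:
    """Toy state machine for a double air-gap transfer: the staging
    unit is never bridged to the outside network and to the vault at
    the same time. Conceptual only, not the patented mechanism."""

    def __init__(self):
        self.network_bridged = False
        self.vault_bridged = False
        self._staged = b""

    def ingest(self, payload: bytes) -> None:
        assert not self.vault_bridged     # inner gap must hold
        self.network_bridged = True       # open the outer side only
        self._staged = payload
        self.network_bridged = False      # detach before going inward

    def deposit(self) -> bytes:
        assert not self.network_bridged   # outer gap must hold
        self.vault_bridged = True         # open the inner side only
        immutable_copy = bytes(self._staged)
        self.vault_bridged = False
        return immutable_copy             # kept write-once in the vault
```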

HyperBunker is deliberately hardware-based, rejecting software-only or “logical air-gap” approaches that remain accessible through networks, credentials, or misconfigurations. The presentation contrasts this with cloud and software-defined backup systems, which may claim immutability or air-gapping but still expose attack paths. HyperBunker’s philosophy is simple: if attackers cannot see or reach the system, they cannot compromise it.

The solution is targeted at essential and highly regulated industries, including critical infrastructure, finance, healthcare, energy, manufacturing, and government. In these environments, downtime is not merely an IT inconvenience but a regulatory, safety, and operational failure. Validation includes more than 80 technical demonstrations, strong interest from insurers—most notably a listing by U.S. cyber insurer Cowbell—and early discussions with defense innovation organizations, all reinforcing the value of true offline recovery.


HyperBunker is delivered as Hardware-as-a-Service through a subscription model that includes the device, support, SLAs, and regular restore testing. This approach provides predictable costs while avoiding the unpredictable, often catastrophic financial impact of ransomware incidents. The company is backed by venture capital, has already delivered its first production units, and is scaling manufacturing and partner networks across Europe and beyond.

Overall, HyperBunker presents itself not as another cybersecurity tool, but as a governance-grade resilience layer—a final, untouchable vault that ensures organizations can recover when all connected systems fail.

We'll see how the company penetrates the market in the coming months.


Friday, December 12, 2025

9LivesData continues the original NEC HYDRAstor product

9LivesData, a Polish storage company founded by veterans of large-scale enterprise storage R&D, participated in the 65th IT Press Tour this week in Athens, Greece.

The firm introduced its flagship product high9stor, a TCO-optimized, scale-out enterprise secondary storage platform designed for backup and archival workloads. Led by CEO Cezary Dubnicki, formerly Head of Storage at NEC Labs Princeton, the company builds on more than 16 years of real-world experience delivering and supporting NEC HYDRAstor, one of the earliest and most scalable global-deduplication backup systems deployed at exabyte scale without data loss.


9LivesData positions high9stor squarely at the intersection of exponential data growth and rising infrastructure costs. The company targets the enterprise secondary storage market, where backup volumes continue to expand while organizations struggle with shrinking backup windows, slow restores, ransomware threats, and escalating total cost of ownership. The central promise of high9stor is to reduce TCO by around 20% today, with a roadmap toward 30% savings, without compromising performance, resiliency, or availability.

high9stor is a software-defined, scale-out backup storage system built on commodity hardware. Using dense 1U nodes with up to 240 TB per rack unit, it scales linearly to roughly 180 nodes and more than 40 PB of raw capacity. Capacity and performance grow together as nodes are added, avoiding the bottlenecks typical of scale-up architectures. The system employs inline global deduplication and compression, significantly reducing stored data volumes while also accelerating backup ingestion.
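
The effect of inline global deduplication is easy to demonstrate. The toy Python sketch below uses fixed-size chunks and one shared chunk pool across backup streams; production systems such as high9stor use far more sophisticated chunking, but the capacity arithmetic works the same way.

```python
import hashlib
import os

pool = {}     # one global chunk pool shared by every backup stream
logical = 0   # bytes presented by the backup applications

def ingest(stream: bytes, chunk_size: int = 4096) -> None:
    """Inline global dedup: hash each chunk on arrival; only chunks
    never seen in any stream consume physical capacity."""
    global logical
    logical += len(stream)
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        pool.setdefault(hashlib.sha256(chunk).hexdigest(), chunk)

monday = os.urandom(8 * 4096)                   # full backup
tuesday = monday[:7 * 4096] + os.urandom(4096)  # next day: one changed block
ingest(monday)
ingest(tuesday)
physical = sum(len(c) for c in pool.values())
print(f"dedup ratio: {logical / physical:.1f}:1")  # 16 chunks in, 9 stored
```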

A core architectural differentiator is the use of distributed, multi-controller algorithms. Background operations such as space reclamation, rebalancing, and integrity checks are executed in parallel across all nodes, rather than by a single controller. This allows high9stor to reclaim capacity in hours instead of weeks, even at very high utilization levels. The platform is designed for non-stop operation, supporting online expansion, hardware refresh, and up to three generations of nodes in a single cluster, eliminating forklift upgrades.

High availability and durability are achieved through erasure coding, allowing the system to tolerate multiple disk or node failures with far lower capacity overhead than traditional replication. Integrated WORM (write-once, read-many) functionality, combined with object lock support and tight integration with leading backup applications, provides strong protection against ransomware and accidental deletion. WAN-optimized, dedup-aware replication enables efficient disaster recovery across sites.
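
The capacity argument for erasure coding over replication comes down to simple arithmetic, illustrated below with hypothetical shard counts (not high9stor's published geometry).

```python
def raw_overhead(data_shards: int, parity_shards: int) -> float:
    """Raw capacity consumed per byte of user data."""
    return (data_shards + parity_shards) / data_shards

# A hypothetical 10+4 layout survives any 4 disk/node failures while
# consuming 1.4x raw capacity; 3-way replication survives only 2
# failures yet consumes 3.0x (modeled as 1 data shard + 2 copies).
print(raw_overhead(10, 4))  # 1.4
print(raw_overhead(1, 2))   # 3.0
```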


Compatibility with existing backup ecosystems is a key focus. high9stor supports standard interfaces such as NFS, CIFS, and S3, as well as deep integrations with major backup vendors including Cohesity NetBackup (OST), Veeam, Commvault, and Nakivo. This allows enterprises to consolidate multiple backup streams into a single global deduplication pool while preserving application-specific optimizations.
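
Because high9stor exposes an S3 interface, any S3-compatible client can address it directly. Here is a hedged example using Python's boto3, in which the endpoint, bucket, and credentials are hypothetical placeholders.

```python
import boto3

# Endpoint, bucket, and credentials below are hypothetical placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://high9stor.example.internal:9000",
    aws_access_key_id="BACKUP_USER",
    aws_secret_access_key="BACKUP_SECRET",
)
s3.upload_file("/var/backups/db-dump.tar.gz", "nightly", "db-dump.tar.gz")
```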

From a business perspective, high9stor is sold as a software subscription priced by raw capacity per month, making costs transparent and predictable. The company targets large enterprises, financial institutions, telecoms, utilities, healthcare, media, and government organizations, with a particular focus on EMEA and Central Asia. Real-world case studies, including large financial institutions operating hundreds of nodes across multiple data centers, underline the platform’s maturity and operational stability.

Overall, 9LivesData presents high9stor as a next-generation backup storage platform that combines proven architectural principles, modern scale-out design, and aggressive TCO optimization—positioning it as a compelling alternative to traditional backup appliances and legacy scale-up systems in an era of relentless data growth.

We'll check in at different points to measure the progress of the team and the product on the market in the coming months.


Wednesday, December 10, 2025

Ewigbyte, a new project to preserve data over the long term on glass

Ewigbyte joined The IT Press Tour this week in Athens, Greece. The company introduces its vision for a new paradigm in cold data storage: one that is secure, sovereign, and environmentally sustainable. It argues that the digital age, driven by explosive data growth, artificial intelligence, and rising energy constraints, has reached a breaking point where traditional storage technologies can no longer scale economically or sustainably. Cold data - information that must be retained for long periods but is rarely accessed - has become the critical bottleneck in global data infrastructure.


Ewigbyte frames the challenge through powerful macro trends. Data volumes are growing faster than enterprise storage production capacity, creating a projected supply gap of roughly 15 zettabytes by 2030, a shortfall of around 50%. At the same time, storage costs are rising sharply, with hard drives and SSDs experiencing double-digit annual price increases. The industry’s dependence on HDDs, SSDs, and magnetic tape - technologies that are decades old and prone to failure, degradation, and environmental risk - makes the current trajectory unsustainable in terms of energy use, electronic waste, water consumption, and CO₂ emissions.

The company’s core proposition is to rethink cold storage from first principles. Instead of optimizing for density and write speed, ewigbyte prioritizes durability, security, and minimal climate impact. Its solution is based on writing data directly onto glass using photonic technology. Data blocks are "burned" into ultra-thin glass media with ultra-short pulse UV lasers and structured light modulators, without toxic coatings. The result is a write-once, immutable storage medium designed to last more than 10,000 years, resistant to heat, humidity, electromagnetic pulses, radiation, and cyber threats such as ransomware. Because the stored data requires no power to maintain, its operational climate impact is effectively zero.

Ewigbyte emphasizes that glass ablation is not experimental science but a proven industrial process already used in other manufacturing contexts. The company’s innovation lies in its modified optical system and its ability to integrate laser writing, robotics, and physical data warehousing into a scalable storage service. Rather than conventional data centers, ewigbyte envisions physical data warehouses where glass-based storage cubes are catalogued, stored securely, and retrieved when needed.


From a market perspective, ewigbyte positions itself as a long-term successor to tape and HDD-based archives. As SSDs dominate hot and warm data tiers, cold data will increasingly migrate to optical and photonic media. The company outlines a staged go-to-market approach, beginning with paid pilot projects focused on WORN (write once, read never) use cases, followed by WORM archival data centers, and eventually scaling toward broader cold and warm data services as economies of scale are achieved. Key applications include compliance archives, backups, hyperscaler archives, and data sets with low read frequency but strict durability and sovereignty requirements.

The roadmap projects an MVP in 2026, the demonstration of the first dedicated archival data center by 2028, and large-scale operational facilities by 2029. Supported by an experienced founding team with legal, technical, and industry expertise, ewigbyte presents itself as a foundational technology company aiming to redefine how humanity preserves data for centuries - shifting cold storage from an energy-hungry liability into a permanent, sustainable asset.

We'll carefully monitor the team's progress, as this is a key European initiative.


Thursday, November 27, 2025

65th Edition of The IT Press Tour in Athens, Greece

The IT Press Tour, a media event launched in June 2010, announced participating companies for the 65th edition scheduled December 9 and 10 in Athens, Greece.

During this edition, the press group will meet:
  • 9LivesData, the developer of NEC HYDRAstor, introducing a new product named high9stor compatible with HYDRAstor,
  • Enakta Labs, a recognized expert team on DAOS,
  • Ewigbyte, an innovator in long-term data preservation on glass,
  • HyperBunker, an interesting dual air-gap model,
  • Plakar, a fast-growing open-source backup and recovery solution,
  • and Severalnines, a reference in DBaaS.

I invite you to follow us on Twitter with #ITPT, via @ITPressTour, my twitter handle @CDP_FST, and the journalists' respective handles.

Tuesday, November 11, 2025

Recap of the 63rd IT Press Tour in Amsterdam, Netherlands

Initially posted on StorageNewsletter November 6, 2025

The 63rd edition of The IT Press Tour took place in Amsterdam a few weeks ago in September. The event constituted an effective forum through which the press group and participating organizations engaged in extensive dialogue on IT infrastructure, cloud computing, networking, cybersecurity, data management and storage, big data and analytics, as well as the overarching integration of AI in these fields. Six organizations participated in the tour, listed here in alphabetical order: CompressionX, DDP/Ardis, EuroNAS, OpenMP, Oxibox and Stackable.

CompressionX is a modern file compression company designed to address the rapidly growing environmental and operational challenges associated with global data storage. As data centers now consume around 3% of the world’s electricity, require massive cooling systems, and contribute significantly to CO₂ emissions, the demand for more efficient data handling has never been greater. CompressionX’s mission is to “save the planet, one file at a time” by reducing the storage footprint of digital information without compromising data integrity.

Founded in 2012, the company began with a mathematical insight that led to the development of its core compression algorithm. After years of refinement and automation, the beta product launched in 2025. CompressionX focuses on delivering a clean, intuitive user experience that eliminates the frustrations common in legacy compression tools, such as slow processing speeds, clunky interfaces, compatibility gaps, and confusing security settings. Its solution provides one-click compression and extraction, secure-by-default encryption, transparent pricing, and seamless cross-platform performance.

The technology uses an intelligent algorithm designed to analyze data and determine the optimal compression strategy in a single pass. This makes it especially effective for large or complex datasets. Key use cases include high-fidelity audio streaming, LiDAR and sensor data transmission, aviation data management, and large-scale IoT device ecosystems—all environments where efficient, fast, and secure data handling is essential.
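
CompressionX has not published its algorithm, but the general idea of picking a strategy in a single pass can be sketched: probe a sample of the payload, then decide how much effort compression is worth. The Python example below is purely illustrative, using standard-library zlib, and is not CompressionX's method.

```python
import math
import zlib
from collections import Counter

def entropy(sample: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 means incompressible noise)."""
    counts, n = Counter(sample), len(sample)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def smart_compress(data: bytes) -> bytes:
    """Probe a small sample, then commit to a strategy in one pass."""
    e = entropy(data[:4096])
    if e > 7.5:                    # already dense: media, archives, crypto
        return data                # store as-is, save the CPU cycles
    level = 9 if e < 3.0 else 6    # highly redundant data: try harder
    return zlib.compress(data, level)
```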

CompressionX differentiates itself through modern architecture, sustainability as a core value, and a strong user-centric approach. The product roadmap includes multi-threaded performance boosts, customizable compression settings for professionals, automated “cold data” detection, cloud integration, and mobile/web decompression access. Pricing ranges from a free tier for individual users to enterprise solutions with advanced controls.

To reinforce its environmental mission, CompressionX partners with sustainability projects such as methane leakage prevention, seagrass ecosystem restoration, elephant anti-poaching support, and rainwater harvesting land regeneration. Overall, CompressionX positions itself as a greener, smarter, and more efficient future for data storage and transfer.


Ardis Technologies presents its DDP shared storage solutions designed specifically for the Media & Entertainment (M&E) industry, where high bandwidth, real-time collaboration, and large unstructured video/audio files require different infrastructure than typical IT storage. The company’s core innovation is its A/V FS file system, a high-availability SAN file system built in-house to support project-based workflows, native Avid project sharing, bin locking, and folder-based access rights. Unlike standard NAS systems that rely on SMB/NFS and block-level caching, DDP uses iSCSI block I/O with file-based caching, enabling Project Caching—a key differentiator.

Project Caching allows active project data to be stored on SSD-based cache for fast access while keeping full project content on high-capacity spinning disks, giving editors “SSD performance with HDD capacity” without disruptive copying or relinking. Data can exist simultaneously in cache and on disks, enabling seamless internal data movement while projects are in use. This is critical in post-production environments where multiple editors work on the same material.
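
Conceptually, the read path of such a cache-first design can be sketched in a few lines of Python. This illustrates the principle only (the paths and promotion policy are invented); DDP implements it inside its A/V FS at the file-caching level.

```python
import shutil
from pathlib import Path

SSD_CACHE = Path("/cache")   # fast tier (paths are invented)
HDD_POOL = Path("/pool")     # capacity tier holding full project content

def read_file(project_file: str) -> bytes:
    """Cache-first read path: serve active project data from SSD while
    the authoritative copy stays on spinning disks. The same data may
    live in both tiers at once, so nothing is relinked or moved away."""
    cached = SSD_CACHE / project_file
    if not cached.exists():
        cached.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(HDD_POOL / project_file, cached)  # promote on access
    return cached.read_bytes()
```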

Ardis offers a wide product range: MicroDDP portable units, Hybrid DDP systems with mixed SSD/HDD packs, miniDDP all-SSD performance systems, DDP10EF NVMe-based high-throughput systems, storage expansion arrays, and Dual High-Availability DDPHead systems with redundant controllers for mission-critical workflows. Systems support Ethernet from 1GbE to 200GbE, NVMe-oF/RDMA, and Fibre Channel depending on performance needs.

Security is emphasized through on-premise workflows, controlled ingest via transfer rooms, anti-virus and checksum verification, RAID6 protection, backup strategies, and air-gapped copies – positioning DDP as safer than cloud-centric storage for sensitive media.

Overall, DDP provides scalable, high-performance shared storage purpose-built for video and film production, combining workflow speed, capacity efficiency, and strong security for studios, broadcasters, and post-production facilities.


EuroNAS is a Munich-based storage and virtualization software company founded in 2005, with development and support teams across Europe. Their mission is to make enterprise storage, high availability, and virtualization both powerful and simple, eliminating complexity and vendor lock-in. They position themselves as an alternative to costly proprietary storage appliances and difficult open-source stacks, offering enterprise capabilities with a user-friendly web interface and personal support.

EuroNAS provides several product families. euroNAS Premium is a high-performance storage OS supporting SMB, NFS, iSCSI, NVMe-oF, and snapshots, used for file servers, backup repositories, media workflows, and surveillance data. euroNAS HA Cluster delivers high-availability storage via mirrored or dual-controller shared storage configurations, ensuring continuous access even during hardware failures – ideal for business-critical data, healthcare, finance, and 24/7 environments.

For scalable deployments, eEKAS is EuroNAS’s Ceph-based scale-out system, offering unified file, block, and S3 object storage with GUI-based management and horizontal scalability to tens of nodes. It is used for video archives, research, cloud services, and large imaging datasets. EuroNAS also offers eEVOS, a hyper-converged virtualization platform combining compute, storage, and backup in one solution. It supports VM management, live migration, high availability, integrated backup, Ceph-based VSAN alternative, and multi-node expansion—positioning it as a cost-effective, simpler alternative to VMware or Proxmox.

Key strengths include freedom from hardware lock-in, intuitive GUI (no Linux expertise required), enterprise reliability, integrated high availability, and direct human support from storage experts. EuroNAS sells via OEM and channel partners like Exertis Hammer and Broadberry, and continues expanding features such as multi-tenancy, S3 support across products, and enhanced virtual networking. 

Overall, EuroNAS aims to deliver flexible, scalable, and affordable enterprise storage and virtualization without complexity or vendor dependency.


The session introduces OpenMP, an industry-standard API for parallel programming on shared-memory systems, accelerators, and heterogeneous computing architectures. Managed by the OpenMP Architecture Review Board (ARB), a non-profit organization, OpenMP provides a directive-based programming model that allows developers to express parallelism in C, C++ and Fortran while maintaining portability and high productivity. The ARB includes major hardware and software vendors who collaborate to evolve and maintain the specification, ensuring broad cross-platform support.

Originally created in 1997 to unify fragmented shared-memory programming models, OpenMP has grown significantly. Early versions focused on multi-core CPUs, but more recent releases (OpenMP 4.0 and beyond) introduced GPU and accelerator offloading, SIMD optimizations, task-based parallelism, memory hierarchy management, and support for modern C/C++ and Fortran standards. OpenMP 6.0 continues to enhance accelerator support, simplify loop transformations, and add features for asynchronous and event-driven parallelism.

OpenMP addresses major challenges in HPC, such as programming complexity on heterogeneous systems, performance portability across different architectures, memory hierarchy optimization, and support for irregular workloads. The model spans multiple levels of parallelism, from vectorization on a single core to distributed execution using hybrid models alongside MPI.

Real-world use cases include autonomous driving software optimization, COVID-19 drug discovery acceleration, quantum chemistry simulations, and turbulence modeling, demonstrating significant speedups through GPU offloading and efficient task scheduling.

Compared to other frameworks, OpenMP stands out for its simplicity, portability, and vendor support, while remaining interoperable with lower-level models such as CUDA, SYCL, MPI, and vendor-specific toolchains. Future roadmap directions include improved multi-device execution, data-flow parallelism, Python integration, and further simplification of heterogeneous programming.

Overall, OpenMP aims to make parallel computing more accessible, scalable, and efficient across CPUs, GPUs, embedded systems, and supercomputers.


Oxibox is a French cybersecurity company founded in 2014 that focuses on secure-by-design backup and cyber-resilience for businesses and public organizations, addressing the growing threat of ransomware. Traditional backup solutions are increasingly targeted by attackers: backups are often encrypted, deleted, or used to propagate attacks. Oxibox’s mission is to ensure every organization can restore operations quickly and safely, even during an active cyber incident.

The company’s core innovation is its R2V and UDP (Universal Data Protection) technology, which provides air-gapped, encrypted, and corruption-resistant backups automatically. By isolating backup environments at the filesystem level and applying behavioral analysis to detect abnormal write patterns, Oxibox prevents attackers from altering stored data. Backups can be restored instantly, with automatic testing and the ability to launch systems as cross-hypervisor virtual machines, enabling business continuity during recovery.
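
One common way to detect the kind of "abnormal write patterns" Oxibox describes is to watch for entropy jumps, since bulk-encrypted files look like random noise. The Python sketch below illustrates that general technique; the thresholds are invented and this is not Oxibox's actual detection model.

```python
import math
from collections import Counter

def entropy(block: bytes) -> float:
    """Shannon entropy in bits per byte; encrypted data sits near 8.0."""
    counts, n = Counter(block), len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def abnormal_write(before: bytes, after: bytes) -> bool:
    """Flag writes that turn structured content into near-random bytes,
    the typical signature of bulk encryption. Thresholds are invented."""
    return entropy(after) > 7.5 and entropy(after) - entropy(before) > 2.0
```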

Oxibox is designed to be simple to deploy (about 30 minutes), compatible with all environments (cloud, on-prem, workstations, NAS, hyperconverged infrastructure), and cost-efficient compared to tape or immutable storage. It offers both on-prem appliances and cloud-based storage, with volume-based pricing and no license lock-in. The company has strong traction in the mid-market and public sector, with more than 6,000 customers and 4,000 public entities protected, and partnerships with Docaposte, Airbus Cyber, and major distributors such as EET.

Oxibox targets organizations between 100 and 1500 employees—often underserved by traditional backup vendors yet heavily targeted by ransomware. The solution ensures resilience across all maturity levels, forming the foundational layer of cybersecurity: protect, respond, remediate, and restart. The roadmap includes expanded international distribution, higher-performance 100 Gbps backup capabilities, deeper hypervisor support, and a full cyber-resilience platform.

Overall, Oxibox positions itself as the first backup solution specifically engineered for ransomware-era threats, ensuring that recovery is always possible—without compromise.


Stackable is a company founded in 2020 that provides a modular, open-source and Kubernetes-native data platform designed to help organizations build and manage modern data architectures without vendor lock-in. The platform brings together a curated suite of popular, production-proven open-source data tools—such as Apache Kafka, NiFi, Spark, Trino, Druid, HBase, Superset, ZooKeeper and Hadoop—and integrates them into a unified, consistent operating model that works on-premises, in the cloud, or in hybrid environments.

Stackable’s mission is to solve the complexity that organizations face when building their own data platforms from fragmented open-source components or relying on costly, proprietary cloud services. With Stackable, customers can maintain full data sovereignty, avoid dependency on single vendors, and keep their data within European security and compliance frameworks. The platform supports standard monitoring, logging, certificate management, authentication integration (LDAP, Kerberos, OIDC), and vulnerability management with signed SBOMs and VEX advisories, ensuring supply-chain transparency and secure operations.

A key architectural principle is “Data Platform as Code”, allowing platform configurations to be defined declaratively, deployed repeatedly, and automated using GitOps workflows. Stackable provides operators that automate cluster lifecycle management, version updates, scaling, and day-2 operations across multiple environments.

The company offers multiple service models: do-it-yourself open source, paid subscriptions with support, consulting for architecture and migration, training, and fully managed deployments (including hosted versions via IONOS). Customers include public sector, financial services, manufacturing, smart cities, and GAIA-X data space initiatives, where data sharing and trust are crucial.

Overall, Stackable delivers a flexible, secure, and scalable open-source data platform that reduces complexity, increases agility, and empowers organizations to control their data infrastructure on their own terms.


Thursday, November 06, 2025

Recap of the 64th IT Press Tour in New York, NY, USA

Initially posted on StorageNewsletter November 13, 2025

The 64th edition of The IT Press Tour was recently organized in the Big Apple, New York City, NY. The event served as a productive platform for the press group and participating organizations to engage in in-depth discussions on IT infrastructure, cloud computing, networking, cybersecurity, data management and storage, big data and analytics, and the broader integration of AI across these domains. Seven companies joined this edition, listed here in alphabetical order: Arcitecta, AuriStor, CTERA, ExaGrid, HYCU, Shade.inc and TextQL.

Arcitecta’s presentation showcases Mediaflux as a comprehensive, unified data management platform designed to address the rapidly growing scale, complexity, and fragmentation of modern data environments. The company outlines its strategic direction, recent customer successes, and the evolution of Mediaflux as a converged system that integrates orchestration, storage, metadata, access, and AI readiness into a single fabric.

Mediaflux enables organizations to ingest, manage, move, analyze, and preserve data across on-prem systems, cloud platforms, and globally distributed sites. Its architecture features a powerful policy engine that automates data lifecycle management – from active storage through long-term archival – and supports multi-protocol access including NFS, SMB, S3, and SFTP. The platform is fully vendor-agnostic, giving customers the freedom to mix storage hardware from NetApp, Dell, IBM, cloud object stores, and tape without lock-in.

Key advanced capabilities include compute-to-data workflows, a world-class metadata engine, and Livewire WAN acceleration, which enables data transfers at up to 95% of link capacity. Mediaflux also incorporates a next-generation vector-aware metadata database (XODB) that supports semantic search, AI pipelines, RAG models, and virtual data hierarchies. Together, these capabilities allow Mediaflux to function as an AI-ready data fabric that gives applications and researchers a unified, intelligence-rich view of all enterprise data.
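
What a vector-aware metadata database buys you can be shown in miniature: store an embedding per asset and answer queries by nearest-neighbour similarity. The Python sketch below is a generic illustration of semantic search, not XODB's API, and the catalogue entries are invented.

```python
import numpy as np

# Toy catalogue: asset id -> embedding of its descriptive metadata.
catalogue = {
    "scan_0001": np.array([0.90, 0.10, 0.00]),
    "genome_42": np.array([0.10, 0.95, 0.20]),
    "lecture_7": np.array([0.00, 0.20, 0.97]),
}

def semantic_search(query: np.ndarray, k: int = 2) -> list:
    """Rank assets by cosine similarity to the query embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(catalogue, key=lambda aid: cos(query, catalogue[aid]),
                  reverse=True)[:k]

print(semantic_search(np.array([0.85, 0.20, 0.05])))  # scan_0001 first
```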

Customer case studies reinforce these strengths. Princeton’s TigerData program uses Mediaflux to manage 200PB of research data while building a 100-year preservation model that spans multiple tiers of storage, from high-performance compute environments to tape-based archives. Dana-Farber Cancer Institute leverages Mediaflux to unify siloed systems, automate archiving, migrate to cloud and tape, and streamline researcher workflows. TU Dresden and IWM demonstrate improvements in collaboration, discoverability, automation, cost reduction, and long-term preservation.

Arcitecta also highlights Datakamer, a growing community focused on data management best practices, with future events planned globally. The roadmap includes expanded vector database capabilities, deployment automation tools, DAMS upgrades, and enhanced stability.

Overall, Mediaflux is presented as a future-proof, AI-ready, end-to-end platform built to simplify data management, eliminate silos, accelerate collaboration, and help organizations keep pace with exponential data growth.



AuriStor, founded in 2007, is a technology-first, fully remote company that develops AuriStorFS, a next-generation, high-security, high-performance distributed file system descended from AFS and OpenAFS. After the free OpenAFS ecosystem proved unsustainable, the team pivoted to a closed-garden model, leading to major performance, reliability, and security advancements.

Their first major commercial success came in 2016, when a global financial institution licensed AuriStorFS to eliminate costly outages, eventually deploying it worldwide and across multi-cloud environments. AuriStor licenses software only; it does not host storage. Every deployment is unique, spanning finance, defense, research universities, HEP labs, and government agencies. The company maintains deep partnerships with Red Hat, SUSE/Cray, Microsoft, LINBIT, TuxCare, and others, and supports a wide range of Linux distributions, macOS, Solaris, and specialty HPC operating systems.

AuriStorFS preserves the /afs namespace and enables seamless migration from legacy AFS environments without flag days, maintaining decades-old data. Its pricing model is unusual: costs are based on servers and protection entities—not data volume, raw storage, or CPU cores—and includes a perpetual-use license.

Since 2022, AuriStor has produced more than 40 patch releases, adding support for new platforms, enhancing compatibility with Linux’s in-tree kafs module, and delivering improvements in RX RPC networking, call termination, and large-scale fileserver shutdown. These changes massively reduce latency, increase throughput (up to 450% improvement on 10+ Gbit links), and enable rapid restart of fileservers with millions of volumes.

Major enhancements include a Volume Feature Framework enabling per-volume capabilities, expanded volume dump formats, OverlayFS whiteout support, and advanced selective acknowledgements for RX. AuriStor also invests heavily in keeping pace with rapid Linux kernel changes, splitting components into GPL and non-GPL modules to remain compatible.

AuriStorFS excels at global, secure, replicated content distribution; cross-platform home/project directories; open-science collaboration; and large distributed compute farms. It is less suited for VM images or databases until future byte-range locking features are added. Use cases include SLAC’s global research workflows, USGS’s real-time hazard data distribution, and massive multinational software-distribution infrastructures scaling to 175,000+ clients, 80+ cells, and millions of volumes. 

Looking forward, AuriStor is advancing RX congestion control, Unicode directory support, deeper container-orchestration integration, and boot-from-AFS capabilities—positioning AuriStorFS for HPC, hybrid cloud, and next-generation distributed computing.



CTERA’s presentation outlines its vision for transforming enterprise data from fragmented, unstructured chaos into an intelligent, AI-ready asset through a unified, secure, globally distributed data fabric. As a leader in hybrid cloud and distributed file systems, CTERA serves large enterprises and government agencies with a software-defined platform that connects edge sites, data centers, and clouds without compromising performance or security.

The company highlights strong growth metrics – 35% annual growth, 125% net retention, and a 90% partner-driven model—alongside industry leadership recognized by GigaOm, Frost & Sullivan, and Coldago. IT leaders’ top 2025 priorities – cybersecurity, AI strategy, and data growth – frame the need for CTERA’s approach.

CTERA explains a three-wave innovation journey:
  1. Wave 1: Location Intelligence unifies silos across cloud, data center, and edge through a global namespace and object-native backend, enabling scalable hybrid cloud storage, high-performance cached edge access, and seamless NAS migration. Hybrid storage adoption is driven by efficiency, resiliency, productivity, and AI requirements.
  2. Wave 2: Metadata Intelligence turns this unified fabric into a secure data lake. Metadata analytics drive operational insight, automation, and cyberstorage capabilities. Immutable snapshots, block-level anomaly detection, and activity monitoring protect against ransomware. Newly launched products include Ransom Protect (AI-based anomaly detection), Insight (360° operational visibility), and MCP (LLM-powered natural-language file interaction).
  3. Wave 3: Enterprise Intelligence elevates the data lake into a strategic AI asset. CTERA stresses that GenAI success depends on high-quality, curated data—not simply vectorizing everything. The platform enables timely ingestion, metadata enrichment, unified formats, filtering, and secure vectorization. With a semantic retrieval layer and permission-aware controls, organizations can create “virtual employees”—AI agents operating safely on curated enterprise data.
Use cases span public sector, financial services, healthcare, retail, industrial design, and federal defense. Case studies show CTERA enabling edge processing for naval fleets and real-time global collaboration for creative agencies. A medical law firm demonstrates how CTERA’s MCP and intelligence layer accelerate document analysis with trustworthy AI.

CTERA positions its intelligent data fabric as the foundation for secure, scalable AI adoption—transforming distributed data into an enterprise’s most valuable asset.



ExaGrid presents itself as the largest independent vendor dedicated exclusively to backup storage, with 17+ years in the market, over 4,800 global customers, and strong financial performance—19 consecutive cash-positive quarters, no debt, and double-digit growth. The company holds the industry’s highest Net Promoter Score (+81) and has earned more backup-storage awards than any competitor. Its appliances are certified in 132 countries and widely deployed across government, healthcare, finance, retail, manufacturing, and enterprise IT.

ExaGrid’s message centers on its Tiered Backup Storage architecture, the only approach purpose-built for backup workloads. Unlike standard disk or inline-deduplication appliances, ExaGrid separates a high-performance landing zone (for fast ingest, fast restores, and instant VM boots) from a deduplicated, non-network-facing repository tier, creating an immutable, air-gapped backup environment. This design eliminates the performance penalties of inline dedupe, avoids rehydration during restores, and ensures a fixed-length backup window via scale-out expansion.

The market is shifting as backup software vendors (Veeam, Rubrik, Commvault, Cohesity/NetBackup) decouple storage and encourage customers to choose their own hardware, opening opportunities for ExaGrid. Customers typically reevaluate backup storage during capacity expansions, hardware refreshes, app changes, cost reduction efforts, SLA failures, or broken backup/recovery workflows.

ExaGrid emphasizes four requirement pillars: Backup & Recovery (fast ingest, data integrity, resilience, and rapid VM/database restores), Business Continuity (redundancy, security, DR), Cost of Ownership (deduplication savings, no forklift upgrades, low power/cooling, price protection), and Proactive Support (assigned L2 engineers, in-theater support, monitoring).

Security is a major differentiator. The repository tier is immutable, isolated, and protected by delayed deletes, encryption, RBAC, 2FA, TLS, and AI-powered Retention Time-Lock. ExaGrid meets DORA, GDPR, NIS2, and Common Criteria requirements. The architecture provides strong ransomware recovery, alerting on deletions and dedupe-ratio anomalies.

ExaGrid integrates deeply with leading backup applications, delivering accelerated ingest, improved dedupe ratios (up to 15:1), faster synthetic fulls, global deduplication, and 6PB full backup support in a single system. Advanced DR options span secondary data centers, colocation sites, and cloud providers (AWS, Azure) with 50:1 WAN bandwidth efficiency.

Recent announcements include support for MongoDB Ops Manager, Rubrik archive tier, TDE-encrypted SQL dedupe, and upcoming AI-powered RTL enhancements. An all-SSD appliance line arrives in late 2025, with Cohesity support in 2026.

ExaGrid positions itself as the performance-leader and security-leader in backup storage—purpose-built, cost-efficient, and resilient against ransomware, with unmatched support and scalability.




HYCU’s presentation focuses on delivering resilient recovery across SaaS, cloud, hybrid, and emerging AI workloads. With more than 4,600 organizations protected in 78 countries, HYCU highlights major advances since the previous briefing: expansion into 25+ new hypervisors, SaaS apps, and cloud services; new integrations such as DD Boost for SaaS, deeper Dell ecosystem support, and sovereign, malware-resistant data protection for more than 90 integrations.

The company frames modern resiliency challenges through rising user error, automation mistakes, insider threats, cyberattacks, and supply-chain compromises. HYCU’s 2025 State of SaaS Resilience Report shows SaaS adoption rising sharply while security incidents are widespread, with GitHub, Jira, iManage, and other platforms exposing major data-loss gaps. Mission-critical data increasingly lives in SaaS, yet most backup vendors protect only a handful of apps and rely on fragmented consoles and vendor-controlled storage.

HYCU positions itself as offering the broadest SaaS and cloud workload coverage, delivering Total Coverage across IaaS, PaaS, DBaaS, SaaS, hybrid cloud, and AI/ML workloads. HYCU R-Cloud enables app-aware discovery, granular recovery, DR, offline recovery, cloud mobility, and compliance workflows, all using customer-owned storage—ensuring sovereignty, eliminating vendor lock-in, and enabling immutable, object-locked backups.

A major theme is resiliency for SaaS and AI-powered applications. HYCU introduces SaaS disaster recovery, data seeding, and offline recovery to guarantee access even during prolonged SaaS outages or supply-chain incidents. The platform also tackles cloud-native risks: fragmentation, blind spots (DBaaS, AI/ML pipelines), and soaring object-storage and egress costs. HYCU’s Lakehouse protection provides atomic backups, cross-project recovery, immutability, and coverage for models, vectors, routines, and access policies—addressing the growing importance of cloud data lakes and AI training assets.

The R-Shield cyber-resilience suite adds anomaly detection across hybrid, SaaS, and cloud workloads; high-performance malware scanning performed at the data source (not in vendor planes); intelligent tagging; and full data sovereignty. R-Lock enforces immutable, customer-owned backups that meet 3-2-1-1-0 requirements.

HYCU concludes by emphasizing its extensible, security-first platform; customer choice in storage and architecture; and leadership validated by GigaOm, which positions HYCU R-Cloud as a Leader and Fast Mover in innovation, cross-cloud mobility, and protection of next-generation workloads.



Shade positions itself as the modern storage and workflow platform built for the exploding demands of creative production. As file sizes surge—driven by high-resolution video, global collaboration, and GenAI—creative teams are overwhelmed by slow legacy tools like Dropbox, Box, Google Drive, Frame.io, and LucidLink. Customer quotes highlight severe issues: multi-hour downloads, lost access to decades of footage, storage revocations, account shutdowns, poor Premiere performance, and a fragmented stack that forces creatives to re-upload assets across 5–7 different systems.

The presentation describes a universal pain point: every company has become a creative production company, yet creative directors now spend more time managing files than making content. Teams lack a single source of truth, frequently duplicate uploads, lose track of where files live, and suffer from inconsistent permissions, siloed review processes, and slow transfers—especially across global teams. A typical workflow involves 100+ hours of downloading, re-uploading, re-archiving, and stitching together tools like Frame.io, Air, LucidLink, Dropbox, and physical drives.

Shade proposes an integrated solution: an intelligent cloud NAS with real-time file streaming, complex previews, built-in review and markup, facial recognition, semantic search, AI autotagging, transcription, transcoding, version control, and custom metadata. The workflow becomes unified: upload once, mount via Shade Drive, edit with streaming performance, share via secure links, and distribute or archive—all from one platform.

The efficiency gains are dramatic. AI autotagging reduces 200 hours of manual logging to a minute; semantic search finds multi-year-old assets in seconds; 4K ProRes files stream instantly; multi-hundred-GB transfers complete immediately instead of requiring physical hard-drive shipments. Shade replaces fragmented stacks and cuts customer costs by 55–70% across SMB, corporate, sports, and media markets. Real customer examples show spend dropping from $80K to $25K, $170K to $70K, and $500K to $150K annually.

Testimonials from Salesforce, Lennar, TEAM, and others underscore faster workflows, accurate AI features, and the value of having one definitive content system. Looking ahead to 2026, Shade plans advanced automations, integrations, Shade Vault, and API-driven workflows that connect creative content with business systems—extending Shade from creative teams to the entire enterprise.



TextQL provides a comprehensive and automated approach to testing SQL queries, data transformations, and full data pipelines, replacing the manual, inconsistent, and error-prone methods that many organizations still rely on. As data ecosystems expand across warehouses, lakes, real-time systems, and AI workflows, SQL logic becomes increasingly complex and must remain accurate despite constant schema changes, new data sources, and rapid iteration. TextQL brings engineering-grade discipline to SQL by enabling teams to write tests for query logic, expected outputs, edge cases, performance behavior, and data quality rules, and then automate those tests within CI/CD pipelines so issues surface before they reach production.
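
The practice described here, treating SQL like tested code, can be illustrated with a generic example. The sketch below uses plain assertions against an in-memory SQLite database, the kind of check that would run in CI; it is not TextQL's actual interface.

```python
import sqlite3

def test_daily_revenue_rollup():
    """CI-style test for SQL logic: build a tiny fixture, run the
    transformation, assert expected output including an edge case."""
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE orders (day TEXT, amount REAL);
        INSERT INTO orders VALUES ('2025-10-06', 10.0),
                                  ('2025-10-06', 15.0),
                                  ('2025-10-07', 0.0);   -- zero-sales day
    """)
    rows = db.execute(
        "SELECT day, SUM(amount) FROM orders GROUP BY day ORDER BY day"
    ).fetchall()
    assert rows == [("2025-10-06", 25.0), ("2025-10-07", 0.0)]

test_daily_revenue_rollup()   # would normally run under pytest in CI
```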

By introducing consistency and repeatability, TextQL greatly increases trust in analytic outputs, dashboards, AI features, and business reports, reducing the risk of silent data corruption or broken transformations. It accelerates development by removing the need for analysts and engineers to manually re-run queries or validate results each time code changes. Teams gain shared visibility into test results, failures, regressions, and data-quality trends, improving collaboration between data engineering, analytics, operations, and governance groups. TextQL also enhances auditability and compliance by maintaining detailed histories of test configurations, results, version changes, and execution context, making it easier to trace how critical datasets and SQL components evolve over time.

As organizations scale to hundreds or thousands of pipelines, queries, models, and dashboards, manual testing becomes impossible. TextQL meets this challenge by orchestrating broad, automated test coverage that adapts to growing data estates and increasingly complex logic. It supports modern cloud data platforms and provides a unified structure that helps teams detect anomalies, validate assumptions, and ensure outputs remain correct even as business rules, schemas, and workloads shift. In a world where data accuracy directly impacts revenue, decision-making, and AI reliability, TextQL transforms SQL testing into a systematic, proactive, and dependable practice that strengthens the entire data lifecycle.


Thursday, October 16, 2025

ExaGrid continues to penetrate the market at a high rate

At The IT Press Tour in New York, ExaGrid positioned itself as a rare constant in a rapidly shifting backup market: a vendor entirely focused on one problem - backup storage - and profitable while doing so. After more than 17 years in the market, the company has built what it describes as the largest independent business dedicated exclusively to backup storage, serving more than 4,800 customers across 80+ countries and reporting continued double-digit growth with no debt and sustained cash positivity.


The core of ExaGrid's proposition is its tiered backup storage architecture, designed to reconcile two historically conflicting goals: fast operational recovery and cost-efficient long-term retention. Unlike inline deduplication systems, which often slow down backups and restores due to rehydration overhead, ExaGrid separates functions into two tiers. A high-performance "Landing Zone" stores recent backups in their native format for fast ingest, instant restores, and rapid VM boots, while a second, non-network-facing repository tier holds deduplicated data for long-term retention at lower cost.
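
The two-tier write path is straightforward to sketch. In the illustrative Python below (not ExaGrid's implementation), data lands in native format for fast ingest and restores, and a background job later deduplicates it into the repository tier.

```python
import hashlib

landing_zone = {}   # recent backups kept in native, undeduplicated form
repository = {}     # non-network-facing deduplicated retention tier

def ingest(name: str, data: bytes) -> None:
    """Fast path: land the backup as-is, ready for instant restores."""
    landing_zone[name] = data

def tier_down(name: str, chunk_size: int = 4096) -> list:
    """Background job: deduplicate an aged backup into the repository,
    returning the recipe of chunk hashes needed to rehydrate it."""
    data = landing_zone.pop(name)
    recipe = []
    for i in range(0, len(data), chunk_size):
        block = data[i:i + chunk_size]
        digest = hashlib.sha256(block).hexdigest()
        repository.setdefault(digest, block)
        recipe.append(digest)
    return recipe
```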


Security and ransomware resilience featured prominently in the discussion. ExaGrid emphasized its use of a tiered air gap, immutability through delayed deletes, and a repository tier that is not directly network-addressable. According to the company, this design ensures that even if ransomware attempts to delete backup data—either through compromised backup software or direct access—the data remains protected and recoverable. Compliance with regulatory frameworks such as GDPR, DORA, and the EU’s NIS2 directive further underpins the platform's enterprise positioning.

Rather than offering an end-to-end backup stack, ExaGrid has aligned itself with the industry’s shift toward best-of-breed architectures. As major backup software vendors increasingly decouple software from hardware, ExaGrid integrates deeply with leading platforms including Veeam, Commvault, Veritas NetBackup, Rubrik, and others. These integrations focus on accelerating ingest, improving deduplication ratios, enabling scale-out architectures, and reducing WAN bandwidth consumption for disaster recovery scenarios.

Scalability is addressed through a true scale-out model. Additional appliances can be added incrementally, maintaining a fixed-length backup window even as data volumes grow into the multi-petabyte range. For distributed enterprises, ExaGrid supports multi-site topologies, hub-and-spoke replication, and cloud-based disaster recovery deployments in AWS or Azure, offering flexibility across on-premises and hybrid environments.

Equally notable was ExaGrid’s emphasis on customer experience. Each customer is assigned a named Level-2 support engineer, installations are typically completed within hours, and all software updates are included without additional licensing fees. With a Net Promoter Score above 80 and a reported customer retention rate exceeding 95 percent, the company argues that operational simplicity and predictable economics are as critical as raw performance.

In a backup market increasingly shaped by cyber risk, regulatory pressure, and escalating data volumes, ExaGrid’s message was clear: while backup software evolves and cloud strategies diversify, resilient, efficient, and recoverable storage remains the foundation. By focusing narrowly on that layer, ExaGrid believes it has carved out a durable role in an industry undergoing structural change.


Tuesday, October 14, 2025

HYCU accelerates on SaaS data protection with advanced resiliency features

As enterprises race toward SaaS, hybrid cloud, and AI-driven architectures, HYCU argues that data resilience has become one of the most underestimated risks in modern IT. At The IT Press Tour #64 presentation, the data protection specialist laid out a stark message: cloud adoption has outpaced organizations’ ability to recover when things go wrong.

HYCU, which now protects more than 4,600 organizations across 78 countries, positions itself as a response to what it calls the “illusion of safety” in SaaS and cloud platforms. While cloud services promise availability, the presentation highlighted numerous real-world incidents - from accidental deletions to misconfigured scripts and prolonged regional outages - demonstrating that business-critical data can disappear without warning. According to HYCU’s own 2025 State of SaaS Resilience Report, 65% of organizations experienced at least one SaaS data breach in the past year, and nearly all enterprises increased their SaaS usage over the last three years.

The company’s central thesis is that resilience must be built at the platform level, not bolted on through fragmented tools. Today’s backup market, HYCU argues, is riddled with silos: separate products for SaaS, cloud workloads, databases, data lakes, and AI pipelines - often with different consoles, policies, and storage constraints. This fragmentation drives up costs, creates blind spots, and leaves organizations exposed to ransomware, insider threats, and supply chain attacks.


HYCU’s answer is an extensible, workload-aware data protection platform designed around customer ownership and control. A defining principle is that HYCU does not store customer data. Instead, backups remain in customer-owned storage - on-premises or in the public cloud - preserving sovereignty, compliance, and flexibility. Encryption, immutability, and policy-driven automation are applied consistently across SaaS, IaaS, PaaS, and emerging AI/ML workloads.

The presentation placed particular emphasis on SaaS resilience, an area many enterprises still overlook. HYCU now supports more than 90 SaaS integrations, including platforms such as Microsoft 365, Google Workspace, GitHub, Jira, Box, Salesforce, and iManage Cloud. Beyond traditional backup and restore, the company has expanded into SaaS disaster recovery, offline recovery, and customer-readable copies—allowing organizations to access data even if a SaaS provider suffers a prolonged outage or supply-chain compromise.

AI and data lake protection emerged as another major theme. As cloud object storage becomes the new system of record for analytics and AI, HYCU highlighted how data recreation costs - egress fees, pipeline rebuilds, and lost productivity - can far exceed the cost of proper protection. Its platform now delivers atomic backups, granular recovery, long-term retention, and significant storage efficiencies—claiming reductions of more than 40:1 in some scenarios.

Security is further reinforced through HYCU R-Shield, a set of capabilities focused on cyber resilience. Unlike approaches that require sending data to vendor-controlled scanning environments, R-Shield performs malware detection at the source, maintaining full customer control. Combined with immutable, “break-glass” backups through HYCU R-Lock, the platform is designed to meet modern ransomware recovery requirements across hybrid, cloud, and SaaS environments.

In closing, HYCU framed resilience not as an IT feature, but as a business imperative. As regulatory pressure increases and downtime costs escalate, the company’s message was clear: organizations must assume disruption will happen - and design recovery strategies that work across any cloud, any application, and any future workload.


Wednesday, October 01, 2025

64th Edition of The IT Press Tour back in New York

The IT Press Tour, a media event launched in June 2010, announced participating companies for the 64th edition scheduled the week of October 6th in New York City.

During this edition, the press group will meet 7 companies:
  • Arcitecta, a reference in data management,
  • AuriStor, a long time player in AFS based solution,
  • CTERA, a pioneer in distributed file services,
  • ExaGrid, a leader in secondary storage,
  • HYCU, an innovator in cloud and SaaS data protection,
  • Shade, a fast-growing M&E storage software company,
  • and TextQL, a young AI player simplifying big data access.

I invite you to follow us on Twitter with #ITPT, via @ITPressTour, my twitter handle @CDP_FST, and the journalists' respective handles.