Friday, January 30, 2026

Novodisq, one of the densest flash arrays on the planet

Novodisq, based in New Zealand, joined The IT Press Tour last week, a good opportunity to get an update after the very first article I published on StorageNewsletter last August, following FMS 2025 where I spoke with Robbie Litchfield and discovered the company and its innovative product.

Novodisq is a New Zealand–based hardware and systems company focused on re-engineering data infrastructure to address the growing constraints of power, space, and data sovereignty in modern data centers. Founded in 2018, the company aims to become the backbone for sovereign and AI-ready data lakes by delivering ultra-dense, ultra-efficient storage and compute platforms designed for long-lived, data-heavy workloads. Novodisq positions its technology as a response to the rapid growth of global data—estimated at 20–30% annually—at a time when data-center power availability, cooling capacity, and physical space are increasingly limited.


The core problem Novodisq addresses is that most enterprise and AI data is neither hot nor archival, but “warm” data that must remain online, accessible, and retained for many years. This layer is traditionally served by power-hungry spinning disks or costly hyperscaler services, both of which scale poorly under today’s power and sovereignty constraints. As AI workloads grow, GPU clusters increasingly sit idle due to data ingestion bottlenecks, power shortages, and inefficient storage economics. Governments and enterprises are also demanding greater control over where data is stored and processed, driving interest in sovereign, on-premises infrastructure. 


Novodisq’s solution is a modular, hardware-first architecture optimized for density, efficiency, and control. The flagship product, Novoblade™, is a 2U blade system that integrates high-density storage and compute in a single chassis. Each blade delivers up to roughly 576 TB, and a fully populated 2U system with 20 blades scales to about 11.5 PB while using 90–95% less power than traditional HDD- or flash-based storage systems. The design emphasizes watts-per-petabyte efficiency, enabling deployments in power-constrained data centers, regional facilities, or even edge locations.
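
As a quick sanity check, the headline figures are mutually consistent:

    $$20 \times 576~\mathrm{TB} = 11{,}520~\mathrm{TB} \approx 11.5~\mathrm{PB}$$

or roughly 5.8 PB per rack unit, which is what drives the watts-per-petabyte argument.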


A key differentiator is Novodisq’s vertically integrated hardware approach. After early proofs of concept using off-the-shelf components, the company chose to design its own SSDs, firmware, and system architecture to tightly control power consumption, cooling, and long-term reliability. The platform is built around low-power SoCs and FPGA-based acceleration, offloading functions such as RAID, checksumming, encryption, and data processing from CPUs. This enables high performance for write-once, read-sometimes data over a targeted 10-year hardware lifespan, with design trade-offs optimized for long-term retention rather than peak IOPS. 


Alongside Novoblade, Novoforge™ serves as a development and pilot platform, allowing customers to test workloads, validate software stacks, and experiment with FPGA-accelerated data processing in a smaller, lab-friendly form factor. Use cases highlighted include genomics and pathology data, backup and restore staging, CCTV and NVR systems, Kubernetes and microservices clusters, and private cloud environments requiring strict data sovereignty. In several scenarios, Novodisq emphasizes the ability to ingest, process, and store data locally—reducing reliance on hyperscalers and improving time-to-recovery and operational resilience.


Novodisq is currently at an early commercial stage, engaging pilot customers and selling MVP hardware, with plans to layer in support and software services over time. Overall, the company positions itself as a dense, power-efficient alternative to legacy storage vendors and hyperscalers, enabling organizations to deploy AI-ready, sovereign data infrastructure in a world increasingly constrained by energy, space, and regulation.


Thursday, January 29, 2026

Scale Computing, a new era to address modern challenges

Scale Computing joined The IT Press Tour this week in Silicon Valley, and the moment was perfect to get an update on the company, its products and, more broadly, its strategy following the acquisition by Acumera a few months ago.

Scale Computing positions itself as a specialized edge computing and networking software company focused on simplifying IT operations, improving resilience, and enabling distributed application deployment across hybrid environments. Following its acquisition by Acumera, the combined company aims to deliver an integrated edge platform spanning compute, networking, security, and orchestration, accelerating Scale Computing’s original vision of resilient, easy-to-operate infrastructure at the edge.



The company defines the “edge” broadly as mission-critical applications running outside centralized data centers or cloud environments, including retail stores, factories, remote sites, ships, and branch offices. Drivers for edge deployment include cost control, latency-sensitive workloads (especially AI inference), regulatory requirements, and resilience in disconnected or low-connectivity environments. Scale Computing emphasizes that operational scalability—deployment, updates, monitoring, and recovery across thousands of distributed sites—is a key challenge for enterprises adopting edge computing. 



Scale Computing’s core platform components include SC//HyperCore, a hyperconverged infrastructure virtualization stack combining compute, storage, and virtualization with self-healing automation and data protection; SC//Fleet Manager, a cloud-based orchestration platform for multi-site visibility, zero-touch provisioning, and application lifecycle management; SC//Reliant Platform, an edge-computing-as-a-service offering focused on large distributed enterprises; and SC//AcuVigil, a managed networking and security service providing SD-WAN, firewalling, compliance monitoring, and endpoint observability. Together, these components unify infrastructure, application deployment, and network management into a single edge platform. 



The acquisition by Acumera adds networking and managed services capabilities, complementing Scale Computing’s virtualization strengths and enabling a full-stack edge solution. The company highlights strong growth driven by VMware migration demand, as enterprises seek alternatives following Broadcom’s acquisition and pricing changes. Channel partners and SMB/midmarket customers are key targets, alongside large global enterprises and retailers. 



Use cases span retail, logistics, government, and industrial environments, with examples including POS systems, surveillance, IoT analytics, and AI-powered applications at the edge. Case studies include distributed infrastructure modernization and AI-driven drive-through automation deployments. Overall, Scale Computing positions itself as a purpose-built edge infrastructure platform enabling enterprises to run critical applications reliably, securely, and cost-effectively across distributed environments with minimal operational overhead.

Tuesday, January 27, 2026

Towards IT automation thanks to AI Agents

Helikai joined The IT Press Tour this week in California and it was a pleasure to meet Jamie Lerner, former CEO of Quantum, and Ross Fujii, previously CDO at Quantum, once again.

Helikai is a mission-driven AI company focused on accelerating enterprise business transformation through specialized AI agents that automate discrete workflows and deliver measurable business outcomes. Its core philosophy is "micro AI": purpose-built agents that perform narrowly defined tasks with enterprise-grade accuracy and predictable cost, scope, and timelines, rather than broad, general-purpose AI models. This approach is designed to reduce hallucinations, improve reliability, and enable rapid deployment of automation in real-world business environments.


The Helikai platform consists of several key components. Helibots are pre-built AI agents for specific workflows across enterprise IT, healthcare, media and entertainment, legal, and data infrastructure. SPRAG (Secure Private Retrieval Augmented Generation) integrates large language models with private enterprise data in secure on-premises or isolated cloud environments, providing grounded, traceable, and compliant AI outputs. KaiFlow is a human-in-the-loop orchestration layer that embeds oversight, audit trails, and decision checkpoints into automated workflows, while Mālama optimizes AI performance and resource consumption for scalable deployment. 
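
To make the SPRAG pattern concrete, here is a minimal retrieval-augmented generation sketch: retrieve the most relevant private documents, then constrain the model to answer only from them. Everything here is assumed for illustration, the toy bag-of-words embedding included; none of it is Helikai's actual API.

    import numpy as np

    def embed(text, vocab):
        # Toy bag-of-words embedding; real deployments use learned embeddings.
        vec = np.zeros(len(vocab))
        for word in text.lower().split():
            if word in vocab:
                vec[vocab[word]] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    def retrieve(query, documents, vocab, k=2):
        # Rank the private documents by cosine similarity to the query.
        q = embed(query, vocab)
        return sorted(documents, key=lambda d: -float(embed(d, vocab) @ q))[:k]

    def grounded_prompt(query, documents, vocab):
        # Constrain the LLM to answer from retrieved sources, so outputs
        # stay grounded and traceable instead of hallucinated.
        context = "\n".join(f"[source {i}] {d}" for i, d in
                            enumerate(retrieve(query, documents, vocab)))
        return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"

    docs = ["Invoices are approved by the finance desk within 5 days.",
            "IT tickets are triaged by the service desk agent."]
    vocab = {w: i for i, w in enumerate(sorted({w for d in docs
                                                for w in d.lower().split()}))}
    print(grounded_prompt("who approves invoices", docs, vocab))

In production the prompt would go to an LLM running inside the secure on-premises or isolated cloud environment, with the source tags providing the traceability the company emphasizes.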


Helikai’s engagement model starts with AI workshops to assess organizational maturity (using frameworks such as MITRE’s AI Maturity Model), identify high-impact low-risk automation opportunities, and select appropriate agents. Agents are trained on customer-specific proprietary data, integrated into enterprise systems, validated against KPIs, and continuously updated as data and models evolve. Deployment options include fully on-premises, private cloud, hybrid, and SaaS models, with strict data isolation and security controls to address enterprise concerns about data leakage and compliance.


Use cases span multiple domains. In enterprise IT and business operations, agents automate document processing, ERP workflows, semantic search, onboarding, IT service desk tasks, analytics, and revenue optimization. In life sciences and healthcare, agents support experimental data capture, literature mining, clinical documentation, trial matching, and predictive population health analytics. Media and entertainment applications include content generation, translation, dubbing, metadata tagging, colorization, and automated workflows. The platform emphasizes combining deterministic automation with AI-driven components to achieve enterprise-grade accuracy and governance. 


Overall, Helikai positions itself as an enterprise-focused agentic AI platform that integrates tightly with corporate data and systems, enabling organizations to build proprietary AI capabilities, automate complex workflows, and achieve faster, more reliable business outcomes while maintaining strict security, governance, and human oversight.


Tuesday, January 20, 2026

Back in California for the 66th Edition of The IT Press Tour

The IT Press Tour, a media event launched in June 2010, announced participating companies for the 66th edition scheduled the week of January 26 in Silicon Valley, CA.

During this edition, the press group will meet 8 hot and innovative organizations:
  • Globus, a widely used unstructured data management solution built by the University of Chicago,
  • Helikai, a young agentic AI company launched by Jamie Lerner,
  • InfoScale, the commercial entity that promotes historical Veritas Software products,
  • The Lustre Collective, an independent organization ensuring the continued development of Lustre,
  • Novodisq, a recent player based in New Zealand building a very dense flash-based storage system,
  • Scale Computing, a reference in workloads consolidation for the edge,
  • VergeIO, an innovative server virtualization platform that replaces VMware,
  • and Zettalane, a flexible block and file SDS for the cloud.

I invite you to follow us on Twitter with #ITPT and @ITPressTour, as well as my own handle @CDP_FST and the journalists' respective handles.

Wednesday, December 17, 2025

Plakar to simplify and boost data protection at any scale

Plakar joined The IT Press Tour last week in Athens, Greece, to introduce its approach to data protection at any scale, designed to work for any configuration.

Plakar positions itself as a foundational layer for modern data resilience, addressing what it describes as a growing “resilience deadlock” caused by ransomware, cloud complexity, AI-driven attacks, and fragmented backup ecosystems. Founded in France and backed by €3M in funding (Seedcamp and prominent tech founders), Plakar combines an open-source core with an enterprise-grade control plane to redefine how organizations protect, store, and restore data across environments.


The company argues that data loss incidents are accelerating due to a convergence of threats: ransomware, cloud misconfiguration, SaaS and AI sprawl, insider risks, supply-chain attacks, and infrastructure failures. At the same time, security budgets grow far more slowly than attack surfaces, making prevention alone insufficient. In this context, Plakar frames backup and rapid recovery as the “last line of defense” when all other controls fail.

Plakar’s central thesis is that today’s backup market is structurally broken. Proprietary formats, vendor lock-in, and incompatible tools create an illusion of safety, while real recovery often fails after attacks. Existing architectures force trade-offs between encryption and efficiency (deduplication), or between security and interoperability. Plakar proposes solving this through an Open Resilience Standard built on transparency, auditability, and zero-trust principles.


At the technical core is the Plakar agent, which packages data from filesystems, object storage, SaaS platforms, and databases into self-contained, portable units called Klosets. These Klosets are encrypted end-to-end, immutable, verifiable, deduplicated, compressed, and fully browsable and searchable. Plakar likens Klosets to what containers did for compute: a standardized, portable abstraction that decouples data protection from infrastructure. Data can be backed up anywhere, stored anywhere (cloud, NAS, tape), and restored everywhere—without proprietary dependencies.
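
For intuition, a tiny content-addressed deduplication sketch shows the core storage idea behind such self-contained units. It is a generic illustration under my own assumptions, not Plakar's on-disk format; a real Kloset would additionally compress and encrypt each chunk client-side.

    import hashlib

    class ChunkStore:
        """Stores each unique chunk once, keyed by its SHA-256 digest."""
        def __init__(self):
            self.chunks = {}                     # digest -> chunk bytes

        def put(self, data, chunk_size=4096):
            # Split into fixed-size chunks; identical chunks across backups
            # are stored only once, which is where deduplication happens.
            recipe = []
            for i in range(0, len(data), chunk_size):
                chunk = data[i:i + chunk_size]
                digest = hashlib.sha256(chunk).hexdigest()
                self.chunks.setdefault(digest, chunk)
                recipe.append(digest)
            return recipe                        # enough to rebuild the data

        def get(self, recipe):
            # Rebuild the original bytes from the chunk recipe.
            return b"".join(self.chunks[d] for d in recipe)

    store = ChunkStore()
    first = store.put(b"A" * 8192 + b"B" * 4096)   # 3 chunks, 2 unique
    second = store.put(b"A" * 8192)                # fully deduplicated
    assert store.get(first) == b"A" * 8192 + b"B" * 4096
    print(len(store.chunks), "unique chunks for", len(first + second),
          "references")                            # -> 2 unique, 5 references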

A key differentiator is the Plakar Vault Storage Protocol, which enables “trustless delegation.” Data is encrypted client-side, and encryption keys never leave the customer’s control. Cloud providers or MSPs can manage storage tiers, replication, and SLAs on opaque data blobs, enabling resilience-as-a-service without exposing sensitive data or creating centralized attack targets. This approach aims to reconcile compliance, sovereignty, and cost optimization.


Plakar recently announced Plakar Enterprise, a unified backup posture management platform delivered as a virtual appliance. It adds role-based access control, multi-user management, secret management integration, orchestration, SLA monitoring, compliance reporting, and centralized visibility across on-prem, cloud, and SaaS assets. The open-source community edition remains free, with guarantees against vendor lock-in: restores are always possible without a license.


Overall, Plakar positions itself not merely as another backup product, but as an open ecosystem and emerging standard for data resilience, designed for an era where recovery certainty, not just protection, determines business survival.

A fresh approach to a domain very often ignored or mis-addressed by companies, and we expect a trajectory similar to what we have seen from the famous usual suspects.


Monday, December 15, 2025

HyperBunker, a new dimension in cyber attack resilience

HyperBunker, a newcomer in the data protection space, joined The IT Press Tour last week in Athens, Greece.

The company develops a hardware-based data protection solution designed to address one of the most critical failures in modern cybersecurity: the inability to reliably recover data after a ransomware attack. Built on insights from more than 50,000 real-world data recovery cases over 25 years, HyperBunker is positioned as a last-resort resilience layer for organizations whose connected defenses and cloud-based backups have already failed.


HyperBunker’s mission is to make recovery certain when everything else breaks down. Its vision is to establish a global standard for offline resilience, based on the principle that attackers can only compromise what they can reach. As ransomware attacks increase in scale, speed, and sophistication—accelerated further by AI-driven intrusion techniques—the presentation argues that traditional, credential-based and cloud-connected security models have become fundamentally unreliable. Industry data shows that most attacks remain undetected, that full domain compromise is often trivial, and that even well-funded security stacks frequently fail to stop modern ransomware variants.

The core problem HyperBunker addresses is not prevention, but guaranteed recovery. Organizations depend on a small set of trust-critical data—identity systems, financial records, operational configurations, regulatory archives, and customer or partner data. If these datasets are lost or corrupted, business continuity collapses regardless of how advanced other IT systems may be. HyperBunker is designed specifically to protect this “data that keeps organizations alive.”

Technically, HyperBunker is a fully offline, physical data vault. It has no credentials, no cloud APIs, and no external connectivity, making it unreachable by attackers. Data enters the system through a patented “butlering” unit that acts as a controlled airlock, enforcing double physical air-gapping between connected environments and the offline vault. Once inside, data is stored as immutable copies, with the most recent versions always preserved. Because the vault is never online, it is inherently resistant to ransomware, insider threats, credential theft, and even future quantum-based attacks.

HyperBunker is deliberately hardware-based, rejecting software-only or “logical air-gap” approaches that remain accessible through networks, credentials, or misconfigurations. The presentation contrasts this with cloud and software-defined backup systems, which may claim immutability or air-gapping but still expose attack paths. HyperBunker’s philosophy is simple: if attackers cannot see or reach the system, they cannot compromise it.

The solution is targeted at essential and highly regulated industries, including critical infrastructure, finance, healthcare, energy, manufacturing, and government. In these environments, downtime is not merely an IT inconvenience but a regulatory, safety, and operational failure. Validation includes more than 80 technical demonstrations, strong interest from insurers—most notably a listing by U.S. cyber insurer Cowbell—and early discussions with defense innovation organizations, all reinforcing the value of true offline recovery.


HyperBunker is delivered as Hardware-as-a-Service through a subscription model that includes the device, support, SLAs, and regular restore testing. This approach provides predictable costs while avoiding the unpredictable, often catastrophic financial impact of ransomware incidents. The company is backed by venture capital, has already delivered its first production units, and is scaling manufacturing and partner networks across Europe and beyond.

Overall, HyperBunker presents itself not as another cybersecurity tool, but as a governance-grade resilience layer—a final, untouchable vault that ensures organizations can recover when all connected systems fail.

We'll see how the company penetrates the market in the coming months.


Friday, December 12, 2025

9LivesData continues the original NEC HYDRAstor product

9LivesData, a Polish storage company founded by veterans of large-scale enterprise storage R&D, participated this week in the 65th IT Press Tour in Athens, Greece.

The firm introduced its flagship product high9stor, a TCO-optimized, scale-out enterprise secondary storage platform designed for backup and archival workloads. Led by CEO Cezary Dubnicki, formerly Head of Storage at NEC Labs Princeton, the company builds on more than 16 years of real-world experience delivering and supporting NEC HYDRAstor, one of the earliest and most scalable global-deduplication backup systems deployed at exabyte scale without data loss.


9LivesData positions high9stor squarely at the intersection of exponential data growth and rising infrastructure costs. The company targets the enterprise secondary storage market, where backup volumes continue to expand while organizations struggle with shrinking backup windows, slow restores, ransomware threats, and escalating total cost of ownership. The central promise of high9stor is to reduce TCO by around 20% today, with a roadmap toward 30% savings, without compromising performance, resiliency, or availability.

high9stor is a software-defined, scale-out backup storage system built on commodity hardware. Using dense 1U nodes with up to 240 TB per rack unit, it scales linearly to roughly 180 nodes and more than 40 PB of raw capacity. Capacity and performance grow together as nodes are added, avoiding the bottlenecks typical of scale-up architectures. The system employs inline global deduplication and compression, significantly reducing stored data volumes while also accelerating backup ingestion.
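
The headline numbers line up:

    $$180~\text{nodes} \times 240~\mathrm{TB/node} = 43{,}200~\mathrm{TB} \approx 43~\mathrm{PB~raw}$$

comfortably above the quoted 40 PB.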

A core architectural differentiator is the use of distributed, multi-controller algorithms. Background operations such as space reclamation, rebalancing, and integrity checks are executed in parallel across all nodes, rather than by a single controller. This allows high9stor to reclaim capacity in hours instead of weeks, even at very high utilization levels. The platform is designed for non-stop operation, supporting online expansion, hardware refresh, and up to three generations of nodes in a single cluster, eliminating forklift upgrades.

High availability and durability are achieved through erasure coding, allowing the system to tolerate multiple disk or node failures with far lower capacity overhead than traditional replication. Integrated WORM (write-once, read-many) functionality, combined with object lock support and tight integration with leading backup applications, provides strong protection against ransomware and accidental deletion. WAN-optimized, dedup-aware replication enables efficient disaster recovery across sites.
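
For intuition about the overhead claim, take an illustrative layout (the parameters are mine, not necessarily high9stor's): with k = 8 data fragments and m = 3 parity fragments, any 3 simultaneous disk or node failures are tolerated at

    $$\text{overhead} = \frac{m}{k} = \frac{3}{8} = 37.5\%$$

whereas triple replication tolerates only 2 failures and costs 200% extra capacity.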


Compatibility with existing backup ecosystems is a key focus. high9stor supports standard interfaces such as NFS, CIFS, and S3, as well as deep integrations with major backup vendors including Cohesity NetBackup (OST), Veeam, Commvault, and Nakivo. This allows enterprises to consolidate multiple backup streams into a single global deduplication pool while preserving application-specific optimizations.

From a business perspective, high9stor is sold as a software subscription priced by raw capacity per month, making costs transparent and predictable. The company targets large enterprises, financial institutions, telecoms, utilities, healthcare, media, and government organizations, with a particular focus on EMEA and Central Asia. Real-world case studies, including large financial institutions operating hundreds of nodes across multiple data centers, underline the platform’s maturity and operational stability.

Overall, 9LivesData presents high9stor as a next-generation backup storage platform that combines proven architectural principles, modern scale-out design, and aggressive TCO optimization—positioning it as a compelling alternative to traditional backup appliances and legacy scale-up systems in an era of relentless data growth.

We'll check in at several points to measure the progress of the team and the product on the market in the coming months.


Wednesday, December 10, 2025

Ewigbyte, a new project to preserve data over the long term on glass

Ewigbyte joined The IT Press Tour this week in Athens, Greece, and introduced its vision for a new paradigm in cold data storage: one that is secure, sovereign, and environmentally sustainable. The company argues that the digital age, driven by explosive data growth, artificial intelligence, and rising energy constraints, has reached a breaking point where traditional storage technologies can no longer scale economically or sustainably. Cold data, information that must be retained for long periods but is rarely accessed, has become the critical bottleneck in global data infrastructure.


ewigbyte frames the challenge through powerful macro trends. Data volumes are growing faster than enterprise storage production capacity, creating a projected supply gap of roughly 15 zettabytes by 2030, around a 50% shortfall. At the same time, storage costs are rising sharply, with hard drives and SSDs experiencing double-digit annual price increases. The industry's dependence on HDDs, SSDs, and magnetic tape, technologies that are decades old and prone to failure, degradation, and environmental risk, makes the current trajectory unsustainable in terms of energy use, electronic waste, water consumption, and CO₂ emissions.
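
Those two figures are consistent with each other: a 15 ZB gap that amounts to about half of demand implies

    $$\text{demand} \approx 30~\mathrm{ZB}, \quad \text{supply} \approx 15~\mathrm{ZB} \;\Rightarrow\; \text{shortfall} = \frac{15}{30} = 50\%$$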

The company’s core proposition is to rethink cold storage from first principles. Instead of optimizing for density and write speed, ewigbyte prioritizes durability, security, and minimal climate impact. Its solution is based on writing data directly onto glass using photonic technology. Data blocks are "burned" into ultra-thin glass media with ultra-short pulse UV lasers and structured light modulators, without toxic coatings. The result is a write-once, immutable storage medium designed to last more than 10,000 years, resistant to heat, humidity, electromagnetic pulses, radiation, and cyber threats such as ransomware. Because the stored data requires no power to maintain, its operational climate impact is effectively zero.

ewigbyte emphasizes that glass ablation is not experimental science but a proven industrial process already used in other manufacturing contexts. The company’s innovation lies in its modified optical system and its ability to integrate laser writing, robotics, and physical data warehousing into a scalable storage service. Rather than conventional data centers, ewigbyte envisions physical data warehouses where glass-based storage cubes are catalogued, stored securely, and retrieved when needed.


From a market perspective, ewigbyte positions itself as a long-term successor to tape and HDD-based archives. As SSDs dominate hot and warm data tiers, cold data will increasingly migrate to optical and photonic media. The company outlines a staged go-to-market approach, beginning with paid pilot projects focused on WORN (write once, read never) use cases, followed by WORM archival data centers, and eventually scaling toward broader cold and warm data services as economies of scale are achieved. Key applications include compliance archives, backups, hyperscaler archives, and data sets with low read frequency but strict durability and sovereignty requirements.

The roadmap projects an MVP in 2026, the demonstration of the first dedicated archival data center by 2028, and large-scale operational facilities by 2029. Supported by an experienced founding team with legal, technical, and industry expertise, ewigbyte presents itself as a foundational technology company aiming to redefine how humanity preserves data for centuries - shifting cold storage from an energy-hungry liability into a permanent, sustainable asset.

We'll carefully monitor the progress made by the team, as this is a key European initiative.


Thursday, November 27, 2025

65th Edition of The IT Press Tour in Athens, Greece

The IT Press Tour, a media event launched in June 2010, announced participating companies for the 65th edition scheduled December 9 and 10 in Athens, Greece.

During this edition, the press group will meet:
  • 9LivesData, the developer of NEC HYDRAstor, introducing a new product named high9stor compatible with HYDRAstor,
  • Enakta Labs, a recognized expert team on DAOS,
  • Ewigbyte, an innovator around long-term data preservation on glass,
  • HyperBunker, an interesting dual air-gap model,
  • Plakar, fast-growing open-source backup and recovery software,
  • and Severalnines, a reference in DBaaS.

I invite you to follow us on Twitter with #ITPT and @ITPressTour, as well as my own handle @CDP_FST and the journalists' respective handles.

Tuesday, November 11, 2025

Recap of the 63rd IT Press Tour in Amsterdam, Netherlands

Initially posted on StorageNewsletter November 6, 2025

The 63rd edition of The IT Press Tour took place in Amsterdam a few weeks ago in September. The event constituted an effective forum through which the press group and participating organizations engaged in extensive dialogue on IT infrastructure, cloud computing, networking, cybersecurity, data management and storage, big data and analytics, as well as the overarching integration of AI in these fields. Six organizations participated in the tour, listed here in alphabetical order: CompressionX, DDP/Ardis, EuroNAS, OpenMP, Oxibox and Stackable.

CompressionX is a modern file compression company designed to address the rapidly growing environmental and operational challenges associated with global data storage. As data centers now consume around 3% of the world’s electricity, require massive cooling systems, and contribute significantly to CO₂ emissions, the demand for more efficient data handling has never been greater. CompressionX’s mission is to “save the planet, one file at a time” by reducing the storage footprint of digital information without compromising data integrity.

Founded in 2012, the company began with a mathematical insight that led to the development of its core compression algorithm. After years of refinement and automation, the beta product launched in 2025. CompressionX focuses on delivering a clean, intuitive user experience that eliminates the frustrations common in legacy compression tools, such as slow processing speeds, clunky interfaces, compatibility gaps, and confusing security settings. Its solution provides one-click compression and extraction, secure-by-default encryption, transparent pricing, and seamless cross-platform performance.

The technology uses an intelligent algorithm designed to analyze data and determine the optimal compression strategy in a single pass. This makes it especially effective for large or complex datasets. Key use cases include high-fidelity audio streaming, LiDAR and sensor data transmission, aviation data management, and large-scale IoT device ecosystems—all environments where efficient, fast, and secure data handling is essential.
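
As an illustration of single-pass strategy selection, the sketch below estimates the entropy of a leading sample and then decides whether, and how hard, to compress. It is a generic example built on zlib, under my own assumptions, and not CompressionX's proprietary algorithm.

    import math
    import zlib
    from collections import Counter

    def entropy_bits_per_byte(sample):
        # Shannon entropy of the byte distribution (0..8 bits per byte).
        if not sample:
            return 0.0
        counts = Counter(sample)
        return -sum((c / len(sample)) * math.log2(c / len(sample))
                    for c in counts.values())

    def compress_adaptive(data):
        # One look at a leading sample decides the strategy: near-random
        # data (already compressed or encrypted) is stored as-is.
        h = entropy_bits_per_byte(data[:65536])
        if h > 7.5:
            return b"RAW0" + data
        level = 9 if h < 4.0 else 6   # redundant data earns a deeper search
        return b"ZLIB" + zlib.compress(data, level)

    def decompress(blob):
        return blob[4:] if blob[:4] == b"RAW0" else zlib.decompress(blob[4:])

    text = b"the quick brown fox jumps over the lazy dog " * 1000
    packed = compress_adaptive(text)
    assert decompress(packed) == text
    print(len(text), "->", len(packed), "bytes")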

CompressionX differentiates itself through modern architecture, sustainability as a core value, and a strong user-centric approach. The product roadmap includes multi-threaded performance boosts, customizable compression settings for professionals, automated “cold data” detection, cloud integration, and mobile/web decompression access. Pricing ranges from a free tier for individual users to enterprise solutions with advanced controls.

To reinforce its environmental mission, CompressionX partners with sustainability projects such as methane leakage prevention, seagrass ecosystem restoration, elephant anti-poaching support, and rainwater harvesting land regeneration. Overall, CompressionX positions itself as a greener, smarter, and more efficient future for data storage and transfer.


Ardis Technologies presents its DDP shared storage solutions designed specifically for the Media & Entertainment (M&E) industry, where high bandwidth, real-time collaboration, and large unstructured video/audio files require different infrastructure than typical IT storage. The company’s core innovation is its A/V FS file system, a high-availability SAN file system built in-house to support project-based workflows, native Avid project sharing, bin locking, and folder-based access rights. Unlike standard NAS systems that rely on SMB/NFS and block-level caching, DDP uses iSCSI block I/O with file-based caching, enabling Project Caching—a key differentiator.

Project Caching allows active project data to be stored on SSD-based cache for fast access while keeping full project content on high-capacity spinning disks, giving editors “SSD performance with HDD capacity” without disruptive copying or relinking. Data can exist simultaneously in cache and on disks, enabling seamless internal data movement while projects are in use. This is critical in post-production environments where multiple editors work on the same material.
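
A toy read-through cache captures the mechanics; this is a generic sketch, not Ardis's A/V FS code. Note that eviction needs no copy-back, since the authoritative copy already resides on the HDD tier:

    from collections import OrderedDict

    class ProjectCache:
        def __init__(self, hdd, capacity=2):
            self.hdd = hdd               # full content on spinning disks
            self.ssd = OrderedDict()     # bounded fast tier (LRU order)
            self.capacity = capacity

        def read(self, path):
            if path in self.ssd:                 # hit: SSD-speed access
                self.ssd.move_to_end(path)
                return self.ssd[path]
            data = self.hdd[path]                # miss: read the HDD tier...
            self.ssd[path] = data                # ...and cache a copy; the data
            if len(self.ssd) > self.capacity:    # now exists in BOTH tiers, so
                self.ssd.popitem(last=False)     # eviction needs no copy-back
            return data

    hdd = {f"/projects/p{i}.av": bytes(16) for i in range(4)}
    cache = ProjectCache(hdd)
    cache.read("/projects/p0.av")   # miss, then cached
    cache.read("/projects/p0.av")   # hit, served from the fast tier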

Ardis offers a wide product range: MicroDDP portable units, Hybrid DDP systems with mixed SSD/HDD packs, miniDDP all-SSD performance systems, DDP10EF NVMe-based high-throughput systems, storage expansion arrays, and Dual High-Availability DDPHead systems with redundant controllers for mission-critical workflows. Systems support Ethernet from 1GbE to 200GbE, NVMe-oF/RDMA, and Fibre Channel depending on performance needs.

Security is emphasized through on-premise workflows, controlled ingest via transfer rooms, anti-virus and checksum verification, RAID6 protection, backup strategies, and air-gapped copies – positioning DDP as safer than cloud-centric storage for sensitive media.

Overall, DDP provides scalable, high-performance shared storage purpose-built for video and film production, combining workflow speed, capacity efficiency, and strong security for studios, broadcasters, and post-production facilities.


EuroNAS is a Munich-based storage and virtualization software company founded in 2005, with development and support teams across Europe. Their mission is to make enterprise storage, high availability, and virtualization both powerful and simple, eliminating complexity and vendor lock-in. They position themselves as an alternative to costly proprietary storage appliances and difficult open-source stacks, offering enterprise capabilities with a user-friendly web interface and personal support.

EuroNAS provides several product families. euroNAS Premium is a high-performance storage OS supporting SMB, NFS, iSCSI, NVMe-oF, and snapshots, used for file servers, backup repositories, media workflows, and surveillance data. euroNAS HA Cluster delivers high-availability storage via mirrored or dual-controller shared storage configurations, ensuring continuous access even during hardware failures – ideal for business-critical data, healthcare, finance, and 24/7 environments.

For scalable deployments, eEKAS is EuroNAS’s Ceph-based scale-out system, offering unified file, block, and S3 object storage with GUI-based management and horizontal scalability to tens of nodes. It is used for video archives, research, cloud services, and large imaging datasets. EuroNAS also offers eEVOS, a hyper-converged virtualization platform combining compute, storage, and backup in one solution. It supports VM management, live migration, high availability, integrated backup, Ceph-based VSAN alternative, and multi-node expansion—positioning it as a cost-effective, simpler alternative to VMware or Proxmox.

Key strengths include freedom from hardware lock-in, intuitive GUI (no Linux expertise required), enterprise reliability, integrated high availability, and direct human support from storage experts. EuroNAS sells via OEM and channel partners like Exertis Hammer and Broadberry, and continues expanding features such as multi-tenancy, S3 support across products, and enhanced virtual networking. 

Overall, EuroNAS aims to deliver flexible, scalable, and affordable enterprise storage and virtualization without complexity or vendor dependency.


The session introduces OpenMP, an industry-standard API for parallel programming on shared-memory systems, accelerators, and heterogeneous computing architectures. Managed by the OpenMP Architecture Review Board (ARB), a non-profit organization, OpenMP provides a directive-based programming model that allows developers to express parallelism in C, C++ and Fortran while maintaining portability and high productivity. The ARB includes major hardware and software vendors who collaborate to evolve and maintain the specification, ensuring broad cross-platform support.

Originally created in 1997 to unify fragmented shared-memory programming models, OpenMP has grown significantly. Early versions focused on multi-core CPUs, but more recent releases (OpenMP 4.0 and beyond) introduced GPU and accelerator offloading, SIMD optimizations, task-based parallelism, memory hierarchy management, and support for modern C/C++ and Fortran standards. OpenMP 6.0 continues to enhance accelerator support, simplify loop transformations, and add features for asynchronous and event-driven parallelism.

OpenMP addresses major challenges in HPC, such as programming complexity on heterogeneous systems, performance portability across different architectures, memory hierarchy optimization, and support for irregular workloads. The model spans multiple levels of parallelism, from vectorization on a single core to distributed execution using hybrid models alongside MPI.

Real-world use cases include autonomous driving software optimization, COVID-19 drug discovery acceleration, quantum chemistry simulations, and turbulence modeling, demonstrating significant speedups through GPU offloading and efficient task scheduling.

Compared to other frameworks, OpenMP stands out for its simplicity, portability, and vendor support, while remaining interoperable with lower-level models such as CUDA, SYCL, MPI, and vendor-specific toolchains. Future roadmap directions include improved multi-device execution, data-flow parallelism, Python integration, and further simplification of heterogeneous programming.

Overall, OpenMP aims to make parallel computing more accessible, scalable, and efficient across CPUs, GPUs, embedded systems, and supercomputers.


Oxibox is a French cybersecurity company founded in 2014 that focuses on secure-by-design backup and cyber-resilience for businesses and public organizations, addressing the growing threat of ransomware. Traditional backup solutions are increasingly targeted by attackers: backups are often encrypted, deleted, or used to propagate attacks. Oxibox’s mission is to ensure every organization can restore operations quickly and safely, even during an active cyber incident.

The company’s core innovation is its R2V and UDP (Universal Data Protection) technology, which provides air-gapped, encrypted, and corruption-resistant backups automatically. By isolating backup environments at the filesystem level and applying behavioral analysis to detect abnormal write patterns, Oxibox prevents attackers from altering stored data. Backups can be restored instantly, with automatic testing and the ability to launch systems as cross-hypervisor virtual machines, enabling business continuity during recovery.
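
As a hedged illustration of behavioral write analysis (a generic sketch, not Oxibox's R2V/UDP implementation): a backup run that rewrites most files with high-entropy, encrypted-looking content is a classic ransomware signature and can be quarantined before it pollutes the backup set.

    import math
    import os
    from collections import Counter

    def entropy(data):
        # Shannon entropy of the byte distribution, from 0 to 8 bits/byte.
        counts = Counter(data)
        return -sum((c / len(data)) * math.log2(c / len(data))
                    for c in counts.values())

    def suspicious(changed_ratio, sample):
        # A backup where most files changed AND the new content looks
        # encrypted (near-maximal entropy) is flagged for review.
        return changed_ratio > 0.5 and entropy(sample) > 7.5

    # Normal incremental backup: few changes, compressible content.
    print(suspicious(0.03, b"quarterly report draft " * 100))  # False
    # Ransomware-like run: most files rewritten with random-looking bytes.
    print(suspicious(0.92, os.urandom(4096)))                  # True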

Oxibox is designed to be simple to deploy (about 30 minutes), compatible with all environments (cloud, on-prem, workstations, NAS, hyperconverged infrastructure), and cost-efficient compared to tape or immutable storage. It offers both on-prem appliances and cloud-based storage, with volume-based pricing and no license lock-in. The company has strong traction in the mid-market and public sector, with more than 6,000 customers and 4,000 public entities protected, and partnerships with Docaposte, Airbus Cyber, and major distributors such as EET.

Oxibox targets organizations between 100 and 1500 employees—often underserved by traditional backup vendors yet heavily targeted by ransomware. The solution ensures resilience across all maturity levels, forming the foundational layer of cybersecurity: protect, respond, remediate, and restart. The roadmap includes expanded international distribution, higher-performance 100 Gbps backup capabilities, deeper hypervisor support, and a full cyber-resilience platform.

Overall, Oxibox positions itself as the first backup solution specifically engineered for ransomware-era threats, ensuring that recovery is always possible—without compromise.


Stackable is a company founded in 2020 that provides a modular, open-source and Kubernetes-native data platform designed to help organizations build and manage modern data architectures without vendor lock-in. The platform brings together a curated suite of popular, production-proven open-source data tools—such as Apache Kafka, NiFi, Spark, Trino, Druid, HBase, Superset, ZooKeeper and Hadoop—and integrates them into a unified, consistent operating model that works on-premises, in the cloud, or in hybrid environments.

Stackable’s mission is to solve the complexity that organizations face when building their own data platforms from fragmented open-source components or relying on costly, proprietary cloud services. With Stackable, customers can maintain full data sovereignty, avoid dependency on single vendors, and keep their data within European security and compliance frameworks. The platform supports standard monitoring, logging, certificate management, authentication integration (LDAP, Kerberos, OIDC), and vulnerability management with signed SBOMs and VEX advisories, ensuring supply-chain transparency and secure operations.

A key architectural principle is “Data Platform as Code”, allowing platform configurations to be defined declaratively, deployed repeatedly, and automated using GitOps workflows. Stackable provides operators that automate cluster lifecycle management, version updates, scaling, and day-2 operations across multiple environments.

The company offers multiple service models: do-it-yourself open source, paid subscriptions with support, consulting for architecture and migration, training, and fully managed deployments (including hosted versions via IONOS). Customers include public sector, financial services, manufacturing, smart cities, and GAIA-X data space initiatives, where data sharing and trust are crucial.

Overall, Stackable delivers a flexible, secure, and scalable open-source data platform that reduces complexity, increases agility, and empowers organizations to control their data infrastructure on their own terms.
