Thursday, March 26, 2026

67th Edition of The IT Press Tour in Sofia, Bulgaria

The IT Press Tour, a media event launched in June 2010, announced the participating companies for the 67th edition, held March 31 and April 1 in Sofia, Bulgaria.

During this edition, the press group will meet 6 hot and innovative companies:

I invite you to follow us on Twitter with the #ITPT hashtag, @ITPressTour, my own handle @CDP_FST, and the journalists' respective handles.

Friday, January 30, 2026

Novodisq, one of the densest flash arrays on the planet

Novodisq, from New Zealand, joined the recent IT Press Tour last week, a good opportunity to get an update following the first article I published on StorageNewsletter last August after FMS 2025, where I spoke with Robbie Litchfield and discovered the company and its innovative product.

Novodisq is a New Zealand–based hardware and systems company focused on re-engineering data infrastructure to address the growing constraints of power, space, and data sovereignty in modern data centers. Founded in 2018, the company aims to become the backbone for sovereign and AI-ready data lakes by delivering ultra-dense, ultra-efficient storage and compute platforms designed for long-lived, data-heavy workloads. Novodisq positions its technology as a response to the rapid growth of global data—estimated at 20–30% annually—at a time when data-center power availability, cooling capacity, and physical space are increasingly limited.


The core problem Novodisq addresses is that most enterprise and AI data is neither hot nor archival, but “warm” data that must remain online, accessible, and retained for many years. This layer is traditionally served by power-hungry spinning disks or costly hyperscaler services, both of which scale poorly under today’s power and sovereignty constraints. As AI workloads grow, GPU clusters increasingly sit idle due to data ingestion bottlenecks, power shortages, and inefficient storage economics. Governments and enterprises are also demanding greater control over where data is stored and processed, driving interest in sovereign, on-premises infrastructure. 


Novodisq’s solution is a modular, hardware-first architecture optimized for density, efficiency, and control. The flagship product, Novoblade™, is a 2U blade system that integrates high-density storage and compute in a single chassis. Each blade delivers up to roughly 576 TB, and a fully populated 2U system with 20 blades scales to about 11.5 PB while using up to 90–95% less power than traditional HDD- or flash-based storage systems. The design emphasizes watts-per-petabyte efficiency, enabling deployments in power-constrained data centers, regional facilities, or even edge locations. 
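As a quick sanity check on the density figures quoted above (the numbers come from the text, not from vendor specifications), the arithmetic works out as follows:

```python
# Sanity check on the density figures quoted above; numbers are those
# cited in the text, not vendor specifications.
BLADE_TB = 576      # approximate capacity per blade
BLADES = 20         # fully populated system

total_pb = BLADE_TB * BLADES / 1000   # decimal TB -> PB
print(f"Total capacity: {total_pb:.2f} PB")  # Total capacity: 11.52 PB

# "Up to 90-95% less power" means the system draws only 5-10% of the
# power of a comparable HDD- or flash-based deployment:
for reduction in (0.90, 0.95):
    print(f"{reduction:.0%} reduction -> {1 - reduction:.0%} of baseline power")
```

576 TB across 20 blades indeed lands at about 11.5 PB, consistent with the claim.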


A key differentiator is Novodisq’s vertically integrated hardware approach. After early proofs of concept using off-the-shelf components, the company chose to design its own SSDs, firmware, and system architecture to tightly control power consumption, cooling, and long-term reliability. The platform is built around low-power SoCs and FPGA-based acceleration, offloading functions such as RAID, checksumming, encryption, and data processing from CPUs. This enables high performance for write-once, read-sometimes data over a targeted 10-year hardware lifespan, with design trade-offs optimized for long-term retention rather than peak IOPS. 


Alongside Novoblade, Novoforge™ serves as a development and pilot platform, allowing customers to test workloads, validate software stacks, and experiment with FPGA-accelerated data processing in a smaller, lab-friendly form factor. Use cases highlighted include genomics and pathology data, backup and restore staging, CCTV and NVR systems, Kubernetes and microservices clusters, and private cloud environments requiring strict data sovereignty. In several scenarios, Novodisq emphasizes the ability to ingest, process, and store data locally—reducing reliance on hyperscalers and improving time-to-recovery and operational resilience.


Novodisq is currently at an early commercial stage, engaging pilot customers and selling MVP hardware, with plans to layer in support and software services over time. Overall, the company positions itself as a dense, power-efficient alternative to legacy storage vendors and hyperscalers, enabling organizations to deploy AI-ready, sovereign data infrastructure in a world increasingly constrained by energy, space, and regulation.


Thursday, January 29, 2026

Scale Computing, a new era to address modern challenges

Scale Computing joined The IT Press Tour this week in Silicon Valley, the perfect moment to get an update on the company, its products and, more broadly, its strategy following the acquisition by Acumera a few months ago.

Scale Computing positions itself as a specialized edge computing and networking software company focused on simplifying IT operations, improving resilience, and enabling distributed application deployment across hybrid environments. Following its acquisition by Acumera, the combined company aims to deliver an integrated edge platform spanning compute, networking, security, and orchestration, accelerating Scale Computing’s original vision of resilient, easy-to-operate infrastructure at the edge.



The company defines the “edge” broadly as mission-critical applications running outside centralized data centers or cloud environments, including retail stores, factories, remote sites, ships, and branch offices. Drivers for edge deployment include cost control, latency-sensitive workloads (especially AI inference), regulatory requirements, and resilience in disconnected or low-connectivity environments. Scale Computing emphasizes that operational scalability—deployment, updates, monitoring, and recovery across thousands of distributed sites—is a key challenge for enterprises adopting edge computing. 



Scale Computing’s core platform components include SC//HyperCore, a hyperconverged infrastructure virtualization stack combining compute, storage, and virtualization with self-healing automation and data protection; SC//Fleet Manager, a cloud-based orchestration platform for multi-site visibility, zero-touch provisioning, and application lifecycle management; SC//Reliant Platform, an edge-computing-as-a-service offering focused on large distributed enterprises; and SC//AcuVigil, a managed networking and security service providing SD-WAN, firewalling, compliance monitoring, and endpoint observability. Together, these components unify infrastructure, application deployment, and network management into a single edge platform. 



The acquisition by Acumera adds networking and managed services capabilities, complementing Scale Computing’s virtualization strengths and enabling a full-stack edge solution. The company highlights strong growth driven by VMware migration demand, as enterprises seek alternatives following Broadcom’s acquisition and pricing changes. Channel partners and SMB/midmarket customers are key targets, alongside large global enterprises and retailers. 



Use cases span retail, logistics, government, and industrial environments, with examples including POS systems, surveillance, IoT analytics, and AI-powered applications at the edge. Case studies include distributed infrastructure modernization and AI-driven drive-through automation deployments. Overall, Scale Computing positions itself as a purpose-built edge infrastructure platform enabling enterprises to run critical applications reliably, securely, and cost-effectively across distributed environments with minimal operational overhead.

Tuesday, January 27, 2026

Towards IT automation thanks to AI Agents

Helikai joined The IT Press Tour this week in California, and it was a pleasure to meet again with Jamie Lerner, former CEO of Quantum, and Ross Fujii, also previously at Quantum as CDO.

Helikai is a mission-driven AI company focused on accelerating enterprise business transformation through specialized AI agents that automate discrete workflows and deliver measurable business outcomes. Its core philosophy is "micro AI": purpose-built agents that perform narrowly defined tasks with enterprise-grade accuracy and predictable cost, scope, and timelines, rather than broad, general-purpose AI models. This approach is designed to reduce hallucinations, improve reliability, and enable rapid deployment of automation in real-world business environments.


The Helikai platform consists of several key components. Helibots are pre-built AI agents for specific workflows across enterprise IT, healthcare, media and entertainment, legal, and data infrastructure. SPRAG (Secure Private Retrieval Augmented Generation) integrates large language models with private enterprise data in secure on-premises or isolated cloud environments, providing grounded, traceable, and compliant AI outputs. KaiFlow is a human-in-the-loop orchestration layer that embeds oversight, audit trails, and decision checkpoints into automated workflows, while Mālama optimizes AI performance and resource consumption for scalable deployment. 
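Helikai has not published SPRAG's internals, but the retrieval-augmented generation pattern it describes can be sketched generically: fetch the most relevant private documents for a query, then constrain the model to answer only from that context. A minimal, illustrative sketch follows, where naive term-overlap scoring stands in for the embeddings and vector store a real system would use:

```python
# Illustrative RAG retrieval step: rank private documents against a query,
# then build a grounded prompt. Naive term overlap stands in for embeddings;
# all names and documents here are hypothetical.
def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    q_terms = set(query.lower().split())
    return sorted(
        docs,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )[:top_k]

docs = [
    "Backup jobs run nightly at 02:00 UTC.",
    "The VPN gateway address is documented in the runbook.",
    "Restore requests require manager approval.",
]
context = retrieve("when do backup jobs run", docs)
# Grounding: the model may only answer from the retrieved context.
prompt = "Answer using only this context:\n" + "\n".join(context)
print(context[0])  # Backup jobs run nightly at 02:00 UTC.
```

Grounding answers in retrieved private data, rather than in the model's parametric memory, is what makes the outputs traceable and auditable, the properties SPRAG emphasizes.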


Helikai’s engagement model starts with AI workshops to assess organizational maturity (using frameworks such as MITRE’s AI Maturity Model), identify high-impact low-risk automation opportunities, and select appropriate agents. Agents are trained on customer-specific proprietary data, integrated into enterprise systems, validated against KPIs, and continuously updated as data and models evolve. Deployment options include fully on-premises, private cloud, hybrid, and SaaS models, with strict data isolation and security controls to address enterprise concerns about data leakage and compliance.


Use cases span multiple domains. In enterprise IT and business operations, agents automate document processing, ERP workflows, semantic search, onboarding, IT service desk tasks, analytics, and revenue optimization. In life sciences and healthcare, agents support experimental data capture, literature mining, clinical documentation, trial matching, and predictive population health analytics. Media and entertainment applications include content generation, translation, dubbing, metadata tagging, colorization, and automated workflows. The platform emphasizes combining deterministic automation with AI-driven components to achieve enterprise-grade accuracy and governance. 


Overall, Helikai positions itself as an enterprise-focused agentic AI platform that integrates tightly with corporate data and systems, enabling organizations to build proprietary AI capabilities, automate complex workflows, and achieve faster, more reliable business outcomes while maintaining strict security, governance, and human oversight.


Tuesday, January 20, 2026

Back in California for the 66th Edition of The IT Press Tour

The IT Press Tour, a media event launched in June 2010, announced participating companies for the 66th edition scheduled the week of January 26 in Silicon Valley, CA.

During this edition, the press group will meet 8 hot and innovative organizations:
  • Globus, a widely used unstructured data management solution built by the University of Chicago,
  • Helikai, a young agentic AI company launched by Jamie Lerner,
  • InfoScale, the commercial entity that promotes historical Veritas Software products,
  • The Lustre Collective, an independent organization assuring the development of Lustre,
  • Novodisq, a recent player based in New Zealand building a very dense flash-based storage system,
  • Scale Computing, a reference in workloads consolidation for the edge,
  • VergeIO, an innovative server virtualization platform that replaces VMware,
  • and Zettalane, a flexible block and file SDS for the cloud.

I invite you to follow us on Twitter with the #ITPT hashtag, @ITPressTour, my own handle @CDP_FST, and the journalists' respective handles.

Wednesday, December 17, 2025

Plakar to simplify and boost data protection at any scale

Plakar joined The IT Press Tour last week in Athens, Greece, to introduce its approach to data protection at any scale, working across any configuration.

Plakar positions itself as a foundational layer for modern data resilience, addressing what it describes as a growing “resilience deadlock” caused by ransomware, cloud complexity, AI-driven attacks, and fragmented backup ecosystems. Founded in France and backed by €3M in funding (Seedcamp and prominent tech founders), Plakar combines an open-source core with an enterprise-grade control plane to redefine how organizations protect, store, and restore data across environments.


The company argues that data loss incidents are accelerating due to a convergence of threats: ransomware, cloud misconfiguration, SaaS and AI sprawl, insider risks, supply-chain attacks, and infrastructure failures. At the same time, security budgets grow far more slowly than attack surfaces, making prevention alone insufficient. In this context, Plakar frames backup and rapid recovery as the “last line of defense” when all other controls fail.

Plakar’s central thesis is that today’s backup market is structurally broken. Proprietary formats, vendor lock-in, and incompatible tools create an illusion of safety, while real recovery often fails after attacks. Existing architectures force trade-offs between encryption and efficiency (deduplication), or between security and interoperability. Plakar proposes solving this through an Open Resilience Standard built on transparency, auditability, and zero-trust principles.


At the technical core is the Plakar agent, which packages data from filesystems, object storage, SaaS platforms, and databases into self-contained, portable units called Klosets. These Klosets are encrypted end-to-end, immutable, verifiable, deduplicated, compressed, and fully browsable and searchable. Plakar likens Klosets to what containers did for compute: a standardized, portable abstraction that decouples data protection from infrastructure. Data can be backed up anywhere, stored anywhere (cloud, NAS, tape), and restored everywhere—without proprietary dependencies.
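Plakar's Kloset internals are not detailed here, but the properties listed (deduplicated, compressed, verifiable, restorable anywhere) follow the familiar content-addressed storage pattern, sketched below in generic form. Chunk sizes and function names are illustrative, not Plakar's actual code:

```python
# Generic content-addressed packing sketch (illustrative, not Plakar's code):
# chunks are addressed by hash, stored once, compressed, and verifiable.
import hashlib
import zlib

def pack(data: bytes, store: dict, chunk_size: int = 4096) -> list[str]:
    """Split a stream into chunks, store each unique chunk once (compressed),
    and return the ordered list of chunk hashes needed to rebuild it."""
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()  # content address doubles as integrity check
        if digest not in store:                     # dedup: identical chunks stored once
            store[digest] = zlib.compress(chunk)
        recipe.append(digest)
    return recipe

def unpack(recipe: list[str], store: dict) -> bytes:
    return b"".join(zlib.decompress(store[d]) for d in recipe)

store: dict[str, bytes] = {}
data = b"A" * 10_000 + b"B" * 10_000       # highly redundant payload
recipe = pack(data, store)
assert unpack(recipe, store) == data       # restore is exact and verifiable
print(len(recipe), "chunks referenced,", len(store), "unique chunks stored")
# 5 chunks referenced, 4 unique chunks stored
```

Because every chunk is addressed by its own hash, any store that holds the blobs can rebuild and verify the data, which is what decouples the backup from any particular infrastructure.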

A key differentiator is the Plakar Vault Storage Protocol, which enables “trustless delegation.” Data is encrypted client-side, and encryption keys never leave the customer’s control. Cloud providers or MSPs can manage storage tiers, replication, and SLAs on opaque data blobs, enabling resilience-as-a-service without exposing sensitive data or creating centralized attack targets. This approach aims to reconcile compliance, sovereignty, and cost optimization.


Plakar recently announced Plakar Enterprise, a unified backup posture management platform delivered as a virtual appliance. It adds role-based access control, multi-user management, secret management integration, orchestration, SLA monitoring, compliance reporting, and centralized visibility across on-prem, cloud, and SaaS assets. The open-source community edition remains free, with guarantees against vendor lock-in: restores are always possible without a license.


Overall, Plakar positions itself not merely as another backup product, but as an open ecosystem and emerging standard for data resilience, designed for an era where recovery certainty, not just protection, determines business survival.

A fresh approach to a domain very often ignored or mis-addressed by companies, and we expect a trajectory similar to what we have seen from the famous usual suspects.


Monday, December 15, 2025

HyperBunker, a new dimension in cyber attack protection

HyperBunker, a newcomer in the data protection space, joined The IT Press Tour last week in Athens, Greece.

The company develops a hardware-based data protection solution designed to address one of the most critical failures in modern cybersecurity: the inability to reliably recover data after a ransomware attack. Built on insights from more than 50,000 real-world data recovery cases over 25 years, HyperBunker is positioned as a last-resort resilience layer for organizations whose connected defenses and cloud-based backups have already failed.


HyperBunker’s mission is to make recovery certain when everything else breaks down. Its vision is to establish a global standard for offline resilience, based on the principle that attackers can only compromise what they can reach. As ransomware attacks increase in scale, speed, and sophistication—accelerated further by AI-driven intrusion techniques—the presentation argues that traditional, credential-based and cloud-connected security models have become fundamentally unreliable. Industry data shows that most attacks remain undetected, that full domain compromise is often trivial, and that even well-funded security stacks frequently fail to stop modern ransomware variants.

The core problem HyperBunker addresses is not prevention, but guaranteed recovery. Organizations depend on a small set of trust-critical data—identity systems, financial records, operational configurations, regulatory archives, and customer or partner data. If these datasets are lost or corrupted, business continuity collapses regardless of how advanced other IT systems may be. HyperBunker is designed specifically to protect this “data that keeps organizations alive.”

Technically, HyperBunker is a fully offline, physical data vault. It has no credentials, no cloud APIs, and no external connectivity, making it unreachable by attackers. Data enters the system through a patented “butlering” unit that acts as a controlled airlock, enforcing double physical air-gapping between connected environments and the offline vault. Once inside, data is stored as immutable copies, with the most recent versions always preserved. Because the vault is never online, it is inherently resistant to ransomware, insider threats, credential theft, and even future quantum-based attacks.

HyperBunker is deliberately hardware-based, rejecting software-only or “logical air-gap” approaches that remain accessible through networks, credentials, or misconfigurations. The presentation contrasts this with cloud and software-defined backup systems, which may claim immutability or air-gapping but still expose attack paths. HyperBunker’s philosophy is simple: if attackers cannot see or reach the system, they cannot compromise it.

The solution is targeted at essential and highly regulated industries, including critical infrastructure, finance, healthcare, energy, manufacturing, and government. In these environments, downtime is not merely an IT inconvenience but a regulatory, safety, and operational failure. Validation includes more than 80 technical demonstrations, strong interest from insurers—most notably a listing by U.S. cyber insurer Cowbell—and early discussions with defense innovation organizations, all reinforcing the value of true offline recovery.


HyperBunker is delivered as Hardware-as-a-Service through a subscription model that includes the device, support, SLAs, and regular restore testing. This approach provides predictable costs while avoiding the unpredictable, often catastrophic financial impact of ransomware incidents. The company is backed by venture capital, has already delivered its first production units, and is scaling manufacturing and partner networks across Europe and beyond.

Overall, HyperBunker presents itself not as another cybersecurity tool, but as a governance-grade resilience layer—a final, untouchable vault that ensures organizations can recover when all connected systems fail.

We'll see how the company penetrates the market in the coming months.


Friday, December 12, 2025

9LivesData continues the original NEC HYDRAstor product

9LivesData, a Polish storage company founded by veterans of large-scale enterprise storage R&D, participated this week in the 65th IT Press Tour in Athens, Greece.

The firm introduced its flagship product high9stor, a TCO-optimized, scale-out enterprise secondary storage platform designed for backup and archival workloads. Led by CEO Cezary Dubnicki, formerly Head of Storage at NEC Labs Princeton, the company builds on more than 16 years of real-world experience delivering and supporting NEC HYDRAstor, one of the earliest and most scalable global-deduplication backup systems deployed at exabyte scale without data loss.


9LivesData positions high9stor squarely at the intersection of exponential data growth and rising infrastructure costs. The company targets the enterprise secondary storage market, where backup volumes continue to expand while organizations struggle with shrinking backup windows, slow restores, ransomware threats, and escalating total cost of ownership. The central promise of high9stor is to reduce TCO by around 20% today, with a roadmap toward 30% savings, without compromising performance, resiliency, or availability.

high9stor is a software-defined, scale-out backup storage system built on commodity hardware. Using dense 1U nodes with up to 240 TB per rack unit, it scales linearly to roughly 180 nodes and more than 40 PB of raw capacity. Capacity and performance grow together as nodes are added, avoiding the bottlenecks typical of scale-up architectures. The system employs inline global deduplication and compression, significantly reducing stored data volumes while also accelerating backup ingestion.

A core architectural differentiator is the use of distributed, multi-controller algorithms. Background operations such as space reclamation, rebalancing, and integrity checks are executed in parallel across all nodes, rather than by a single controller. This allows high9stor to reclaim capacity in hours instead of weeks, even at very high utilization levels. The platform is designed for non-stop operation, supporting online expansion, hardware refresh, and up to three generations of nodes in a single cluster, eliminating forklift upgrades.

High availability and durability are achieved through erasure coding, allowing the system to tolerate multiple disk or node failures with far lower capacity overhead than traditional replication. Integrated WORM (write-once, read-many) functionality, combined with object lock support and tight integration with leading backup applications, provides strong protection against ransomware and accidental deletion. WAN-optimized, dedup-aware replication enables efficient disaster recovery across sites.
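The capacity argument for erasure coding over replication is easy to quantify. A short sketch with illustrative parameters (the text does not state high9stor's actual coding scheme):

```python
# Raw-capacity overhead: k data + m parity erasure coding vs n-way replication.
# Parameters are illustrative; the text does not state high9stor's actual layout.
def ec_overhead(k: int, m: int) -> float:
    """Raw bytes stored per usable byte with k data + m parity fragments;
    tolerates any m fragment failures."""
    return (k + m) / k

def replication_overhead(copies: int) -> float:
    """Raw bytes stored per usable byte with full copies;
    tolerates copies-1 failures."""
    return float(copies)

# Both configurations below survive two simultaneous failures:
print(ec_overhead(10, 2))       # 1.2  -> 20% capacity overhead
print(replication_overhead(3))  # 3.0  -> 200% capacity overhead
```

For the same two-failure tolerance, the erasure-coded layout in this example consumes 2.5x less raw capacity than triple replication, which is the "far lower capacity overhead" the paragraph refers to.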


Compatibility with existing backup ecosystems is a key focus. high9stor supports standard interfaces such as NFS, CIFS, and S3, as well as deep integrations with major backup vendors including Cohesity NetBackup (OST), Veeam, Commvault, and Nakivo. This allows enterprises to consolidate multiple backup streams into a single global deduplication pool while preserving application-specific optimizations.

From a business perspective, high9stor is sold as a software subscription priced by raw capacity per month, making costs transparent and predictable. The company targets large enterprises, financial institutions, telecoms, utilities, healthcare, media, and government organizations, with a particular focus on EMEA and Central Asia. Real-world case studies, including large financial institutions operating hundreds of nodes across multiple data centers, underline the platform’s maturity and operational stability.

Overall, 9LivesData presents high9stor as a next-generation backup storage platform that combines proven architectural principles, modern scale-out design, and aggressive TCO optimization—positioning it as a compelling alternative to traditional backup appliances and legacy scale-up systems in an era of relentless data growth.

We'll check in at different points to measure the progress of the team and the product on the market in the coming months.


Wednesday, December 10, 2025

Ewigbyte, a new project to preserve data over the long term on glass

Ewigbyte joined The IT Press Tour this week in Athens, Greece. The company introduces its vision for a new paradigm in cold data storage: one that is secure, sovereign, and environmentally sustainable. The company argues that the digital age, driven by explosive data growth, artificial intelligence, and rising energy constraints, has reached a breaking point where traditional storage technologies can no longer scale economically or sustainably. Cold data, information that must be retained for long periods but is rarely accessed, has become the critical bottleneck in global data infrastructure.


ewigbyte frames the challenge through powerful macro trends. Data volumes are growing faster than enterprise storage production capacity, creating a projected supply gap of roughly 15 zettabytes by 2030, around a 50% shortfall. At the same time, storage costs are rising sharply, with hard drives and SSDs experiencing double-digit annual price increases. The industry's dependence on HDDs, SSDs, and magnetic tape (technologies that are decades old and prone to failure, degradation, and environmental risk) makes the current trajectory unsustainable in terms of energy use, electronic waste, water consumption, and CO₂ emissions.

The company’s core proposition is to rethink cold storage from first principles. Instead of optimizing for density and write speed, ewigbyte prioritizes durability, security, and minimal climate impact. Its solution is based on writing data directly onto glass using photonic technology. Data blocks are "burned" into ultra-thin glass media with ultra-short pulse UV lasers and structured light modulators, without toxic coatings. The result is a write-once, immutable storage medium designed to last more than 10,000 years, resistant to heat, humidity, electromagnetic pulses, radiation, and cyber threats such as ransomware. Because the stored data requires no power to maintain, its operational climate impact is effectively zero.

ewigbyte emphasizes that glass ablation is not experimental science but a proven industrial process already used in other manufacturing contexts. The company’s innovation lies in its modified optical system and its ability to integrate laser writing, robotics, and physical data warehousing into a scalable storage service. Rather than conventional data centers, ewigbyte envisions physical data warehouses where glass-based storage cubes are catalogued, stored securely, and retrieved when needed.


From a market perspective, ewigbyte positions itself as a long-term successor to tape and HDD-based archives. As SSDs dominate hot and warm data tiers, cold data will increasingly migrate to optical and photonic media. The company outlines a staged go-to-market approach, beginning with paid pilot projects focused on WORN (write once, read never) use cases, followed by WORM archival data centers, and eventually scaling toward broader cold and warm data services as economies of scale are achieved. Key applications include compliance archives, backups, hyperscaler archives, and data sets with low read frequency but strict durability and sovereignty requirements.

The roadmap projects an MVP in 2026, the demonstration of the first dedicated archival data center by 2028, and large-scale operational facilities by 2029. Supported by an experienced founding team with legal, technical, and industry expertise, ewigbyte presents itself as a foundational technology company aiming to redefine how humanity preserves data for centuries, shifting cold storage from an energy-hungry liability into a permanent, sustainable asset.

We'll carefully monitor the progress made by the team, as it is a key European initiative.


Thursday, November 27, 2025

65th Edition of The IT Press Tour in Athens, Greece

The IT Press Tour, a media event launched in June 2010, announced participating companies for the 65th edition scheduled December 9 and 10 in Athens, Greece.

During this edition, the press group will meet:
  • 9LivesData, the developer of NEC HYDRAstor, introducing a new product named high9stor compatible with HYDRAstor,
  • Enakta Labs, a recognized expert team on DAOS,
  • Ewigbyte, an innovator around long-term data preservation on glass,
  • HyperBunker, an interesting dual air-gap model,
  • Plakar, a fast growing backup and recovery open source software,
  • and Severalnines, a reference in DBaaS.

I invite you to follow us on Twitter with the #ITPT hashtag, @ITPressTour, my own handle @CDP_FST, and the journalists' respective handles.