Tuesday, January 02, 2024

Recap of the 52nd edition of The IT Press Tour

Initially posted on StorageNewsletter 15/12/2023
 
The 52nd edition of The IT Press Tour took place recently in Madrid, Spain. It was an opportunity to meet European and American companies, some famous names we had already met but also newcomers, so globally a good mix of people bringing innovations and new ways to address new IT and storage challenges. During this edition, dedicated to cloud, IT infrastructure, networking, security, data management and storage, we met DataCore, Disk Archive, Inspeere, Tiger Technology, XenData and ZettaScale Technology.

DataCore
The team chose the event to announce two major developments for SANsymphony and Swarm. At the same time, a company update was necessary, as the positioning continues to evolve at a rapid pace with acquisitions and an active presence in storage domains adjacent to the historical one. It means solutions for core, edge and cloud environments, with some similar challenges but also radically different ones, from primary to secondary storage.

DataCore confirms its financial growth and robustness with 14 consecutive years of profitability, not so common in the storage industry. It delivers 30% ARR growth with 99% recurring revenue. To illustrate this, the southern European region, led by Pierre Aguerreberry, signed 201 new customers in the last few months, fueled by a loyal channel partner network and a significant product portfolio expansion. As already mentioned, the management team has chosen to go beyond its comfort zone with object and Kubernetes storage solutions plus, more recently, AI extensions to feed the entire line, and even a dedicated business unit, named Perifery, targeting media and entertainment IT and storage needs. This strategy feeds a cross-sell/up-sell model that fuels partners with new products to sell into a strong installed base.

First, SANsymphony, a reference in storage virtualization, a category called software-defined storage for several years now, will support NVMe over TCP and FC, improve snapshot and CDP rollback with compression, provide extensions for VMware with better vCenter integration, and deliver adaptive data placement (ADP) as a new key capability. This core feature optimizes primary storage performance, QoS and cost with auto-tiering plus inline deduplication and compression. The block access layer continuously captures and accumulates access information for each data block and uses it to decide where to place the block within the storage pool. It helps make the right placement decision between two blocks accessed at the same time when one of them has also been actively touched before, changing the “temperature” of the block.
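To illustrate the general idea behind such temperature-driven placement, here is a minimal sketch assuming a decaying per-block access counter; the tier names, thresholds and decay half-life are hypothetical, and this is purely illustrative, not DataCore's implementation.

```python
import time
from collections import defaultdict

# Hypothetical tiers, hottest first; heat thresholds are illustrative only.
TIERS = [("nvme", 50.0), ("ssd", 10.0), ("hdd", 0.0)]
HALF_LIFE_S = 3600.0  # access "heat" decays with a 1-hour half-life (assumption)

class BlockHeatMap:
    """Tracks a decaying access counter (the block 'temperature') per block."""
    def __init__(self):
        self.heat = defaultdict(float)
        self.last_seen = {}

    def record_access(self, block_id: str) -> None:
        now = time.time()
        # Decay the previous heat based on elapsed time, then add this access.
        if block_id in self.last_seen:
            elapsed = now - self.last_seen[block_id]
            self.heat[block_id] *= 0.5 ** (elapsed / HALF_LIFE_S)
        self.heat[block_id] += 1.0
        self.last_seen[block_id] = now

    def pick_tier(self, block_id: str) -> str:
        # Place the block on the fastest tier whose threshold its heat exceeds.
        h = self.heat[block_id]
        for tier, threshold in TIERS:
            if h >= threshold:
                return tier
        return TIERS[-1][0]

heatmap = BlockHeatMap()
heatmap.record_access("blk-42")
print(heatmap.pick_tier("blk-42"))  # a single recent access lands on the coldest tier
```

Two blocks accessed at the same instant can thus end up on different tiers: the one with a history of recent accesses carries more residual heat and stays on faster media.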

On the Swarm side, the main news is the single-server approach, in fact the containerization of the object storage software orchestrated with Kubernetes. This iteration fits the edge strategy, offering ready-to-use, simple S3 storage for relatively small configurations under 100TB. It also means that Swarm can now be deployed in different modes: pure Swarm with clusters, potentially across multiple sites, but also as smaller configurations building a real dispersed network federated by Kubernetes. Other improvements are S3 object locking for additional backup software, in fact more of a validation, and soon object services to automate processing workflows.
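For readers less familiar with S3 object locking, the snippet below shows what it looks like from a backup application's point of view; this is generic boto3 usage against any S3-compatible endpoint, with a placeholder endpoint, bucket and credentials, not Swarm-specific code.

```python
from datetime import datetime, timedelta, timezone
import boto3

# Placeholder endpoint and credentials for an S3-compatible store.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Object lock must be enabled when the bucket is created.
s3.create_bucket(Bucket="backups", ObjectLockEnabledForBucket=True)

# Write an immutable backup object retained for 30 days in COMPLIANCE mode.
s3.put_object(
    Bucket="backups",
    Key="job-2023-12-15.bak",
    Body=b"...backup payload...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```

Until the retention date passes, such an object cannot be overwritten or deleted, which is exactly what backup vendors validate when they certify an S3 target.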

One last piece of information regarding both products: they will also receive some AI-oriented features, AIOps for SANsymphony and object services for Swarm.


Disk Archive
Founded in the UK in 2008, Disk Archive is self-funded and profitable, supporting 450+ customers. The company has designed a cold data storage platform to address long-term data archiving needs.

The product name, ALTO, stands for ALternative to LTO and clearly promotes the use of HDDs rather than LTO tapes. ALTO is well adopted in media and entertainment but also in oil and gas and other domains. Alan Hoggarth, CEO and founder, claims to deliver a lower TCO than tape- and tape-library-based solutions with similar capacity and retention times.

One of the dimensions of cost reduction is the energy bill, in other words, how to manage the power of HDDs, an active (powered) media, over 10 or 20 years. It is unrealistic, not to say absurd, to keep the entire disk array up and running over that period of time. You get the idea: Disk Archive leverages the MAID concept, Massive Array of Idle Disks, heavily promoted by Copan Systems in the mid-2000s and later by Nexsan with Auto-MAID, and several iterations have been made on this idea since. MAID brings several effects, such as a longer life for HDDs, proven by Disk Archive's field experience, plus air gap and vault properties. The team has seen 15 years of lifetime and counting for HDDs in systems deployed in the company's early days. Globally, the power consumption drops to less than 210W per PB.
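To give a feel for how idle drives can be put to sleep, here is a minimal sketch assuming a Linux host with hdparm available; the device names, idle threshold and sweep logic are assumptions for illustration, not ALTO's actual mechanism.

```python
import subprocess
import time

IDLE_THRESHOLD_S = 15 * 60  # spin a disk down after 15 minutes without I/O (assumption)
last_io = {"/dev/sdb": time.time(), "/dev/sdc": time.time()}  # illustrative devices

def spin_down(device: str) -> None:
    # hdparm -y asks the drive to enter standby (spun down) immediately.
    subprocess.run(["hdparm", "-y", device], check=True)

def maid_sweep() -> None:
    """Spin down every tracked drive that has been idle longer than the threshold."""
    now = time.time()
    for device, last_access in last_io.items():
        if now - last_access > IDLE_THRESHOLD_S:
            spin_down(device)

maid_sweep()
```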

Leveraging standard software and components, Disk Archive belongs to the SDS category, delivered as a combination of hardware and software. Each machine is a 4U chassis with 60 HDDs, delivering 1,440TB with 24TB disks. Each primary chassis runs CentOS and can manage up to 10 expansion enclosures. A smaller model exists with 24 HDD slots. The company sells the systems empty and users can pick any 2.5″ or 3.5″ HDDs of their choice, or even SSDs. For MAID to be effective, it is important to understand that grouping or unifying disks into logical volumes or LUNs with logical volume managers or RAID would be counter-productive, as it creates dependencies on their power state. Instead, disks are managed individually, each with its own file system, here ext4. On the access side, the ALTO node exposes an API and an SMB share via a gateway mode.

A file is written in its entirety, never segmented, at least twice, to 2 disks within a chassis or across chassis when multiple systems are deployed. A single copy is also possible if another copy exists outside of the Disk Archive managed perimeter. It immediately means that the maximum file size is limited by the size of the ext4 partition on a single disk, but with today's high-capacity HDDs this model works perfectly and is largely sufficient in the vast majority of cases.
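A toy sketch of this whole-file, two-copy placement across individually mounted ext4 disks might look like the following; the mount points and the free-space policy are assumptions, purely to illustrate the principle.

```python
import os
import shutil

# Each ext4 disk is mounted on its own path; no RAID or LVM spans them.
DISKS = ["/mnt/disk01", "/mnt/disk02", "/mnt/disk03", "/mnt/disk04"]

def pick_two_disks(size: int) -> list[str]:
    """Pick the two disks with the most free space that can hold the whole file."""
    candidates = [(shutil.disk_usage(d).free, d) for d in DISKS]
    candidates = [c for c in candidates if c[0] >= size]
    candidates.sort(reverse=True)
    if len(candidates) < 2:
        raise RuntimeError("not enough disks with free space for two full copies")
    return [d for _, d in candidates[:2]]

def archive(path: str) -> list[str]:
    size = os.path.getsize(path)
    targets = []
    for disk in pick_two_disks(size):
        dest = os.path.join(disk, os.path.basename(path))
        shutil.copy2(path, dest)  # the file is copied in full, never segmented
        targets.append(dest)
    return targets  # both locations would then be recorded in a catalog
```

Because each copy is a plain file on a plain ext4 disk, any single drive can be pulled, mounted elsewhere and read on its own, which is part of the vault and air-gap argument.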


Inspeere

Based in France, Inspeere was founded in 2019 and recently raised €600,000 to sustain its ambition. The mission is to offer a new way to protect data against cyber threats, data loss or, more globally, system failure, with an innovative backup solution dedicated to edge IT. The product relies on a mix of hardware, the Datis box, an x86 server running Linux with OpenZFS, and a data orchestration and management software layer.

In detail, the team has designed a P2P architecture that links a data source to N similar targets. This dispersed network of machines are all peers, hence the company name, and contributes to the robustness of the solution. The source machine snapshots, compresses, encrypts, splits, encodes and distributes data chunks to the remote systems. Inspeere has built this data distribution on Reed-Solomon erasure coding (EC). It is key to note that data is encrypted at the source before the chunking and distribution phases, as the EC scheme used here is systematic.
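The sketch below walks through that source-side sequence with standard tools: a ZFS snapshot and send, zlib compression and Fernet encryption, then chunking with a single XOR parity chunk as a deliberately simplistic stand-in for the real systematic Reed-Solomon coding; the peer names are made up and this is not Inspeere's code.

```python
import subprocess
import zlib
from functools import reduce
from operator import xor
from cryptography.fernet import Fernet

PEERS = ["peer1.example", "peer2.example", "peer3.example",
         "peer4.example", "peer5.example"]  # illustrative 4 data + 1 parity layout

def split_and_parity(data: bytes, k: int = 4) -> list[bytes]:
    """Split into k equal chunks plus one XOR parity chunk.
    A simplistic stand-in for the systematic Reed-Solomon coding used in the product."""
    data += b"\x00" * (-len(data) % k)          # pad to a multiple of k
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytes(reduce(xor, column) for column in zip(*chunks))
    return chunks + [parity]

def backup(dataset: str, snapname: str, key: bytes) -> dict[str, bytes]:
    # 1. Take a read-only ZFS snapshot of the dataset.
    subprocess.run(["zfs", "snapshot", f"{dataset}@{snapname}"], check=True)
    # 2. Serialize the snapshot as a stream.
    stream = subprocess.run(["zfs", "send", f"{dataset}@{snapname}"],
                            check=True, capture_output=True).stdout
    # 3. Compress, then encrypt at the source (encryption happens before chunking).
    ciphertext = Fernet(key).encrypt(zlib.compress(stream))
    # 4. Encode into data + parity chunks and assign one chunk per remote peer.
    chunks = split_and_parity(ciphertext)
    return dict(zip(PEERS, chunks))  # the real product ships each chunk to its peer
```

Because the coding is systematic, the data chunks are plain fragments of the stream, which is exactly why encryption has to happen before they are distributed to the peers.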

Also, on paper the EC supports 32+16, meaning a total of 48 peers tolerating up to 16 failed or unavailable machines. OpenZFS is paramount here, of course for local data integrity, but above all for its read-only snapshots and replication mechanism. ZFS remains a local disk file system, so pay attention to the philosophy of its use: Inspeere doesn't offer a distributed or scale-out ZFS but rather a way to glue independent ZFS-based servers together. All Datis entities are autonomous, simply connected, and maintain a specific network usage.

Inspeere targets SMB entities and the team has realized that 4+2 or 6+2 is largely enough and matches actual deployments. As Datis boxes are not volatile systems, their availability is high, which allows this reduced number of parity chunks. These systems operate as local file servers within each company, serving “classic” data and acting as the backup repository for clients via backup software like Acronis, Atempo, Nakivo, Veeam, Bacula or others, or even simple tools and OS commands. All Datis boxes store all data versions and protect themselves via the remote peers, reaching a new level of data durability.

This approach prevents or delays the purchase of secondary storage, contributes to a very efficient data protection TCO and therefore counts positively toward green and ESG corporate objectives. The solution is, of course, compliant with GDPR and NIS2.

Now, once again, there is nothing really new here; it is all about execution, probably via specific partners targeting vertical needs in certain activities.


Tiger Technology
The Bulgarian company has chosen a data resiliency angle, addressing data availability and disaster recovery in a hybrid world. Founded 18 years ago, Tiger Technology, today with 70+ employees, is a well-known player in file storage that has moved from a pure on-premises world to hybrid. And the result is significant, with a footprint of 11k+ customers, essentially in rich content such as media and entertainment, surveillance and healthcare, but also in generic IT.

This market adoption is fueled by Tiger Bridge, which acts as an advanced Windows-based file storage gateway. Users don't feel any difference between local and cloud files, the result of a fairly unique Windows and NTFS integration and expertise.

Hybrid cloud is a reality, coming on one side from users who fully jumped into the cloud, started some repatriation and finally adopted a mixed configuration, and on the other side from users moving incrementally to the cloud for some data, workloads and vertical usages. The final landing zone is this hybrid mode, with different balance points for various industries, needs and configurations. Users drive this adoption based on quality of service, flexibility, complexity and, above all, TCO.

Tiger has promoted for quite some time a model called on-premises first (OPF), with a progressive, controlled cloud extension coupled seamlessly to local production sites. The data gravity dimension is key here, with an immediate reality in some applications, as we live in a digital world flooded by a real data deluge.

Key for edge applications, Tiger Technology identified the need to integrate Tiger Bridge with key vertical needs such as surveillance, healthcare and a few others. To sustain that strategy and these new areas of growth, management has decided to create new business entities, like Tiger Surveillance, dedicated to that business and industry segment. In that domain, massive rich media files are captured all day long and require local space for constant camera feeds and rapid problem detection, aligned with local regulations and quality-of-service objectives, but also an extension to cloud object storage for the bulk of the volume.

The company is accelerating on this front and signing deal after deal with cities, airports and similar entities. For such deployments, data resiliency complements file access methods with DR, CDP and ransomware protection, illustrating why Tiger Bridge is a reference in the domain. The product supports active/passive or active/active architectures aligned with application requirements and site constraints. In the A/A mode, configured locally, mixed or cloud-only, airports reach new levels of resiliency, critical for daily operations in the current climate.

We expect Tiger to continue this vertical integration to address IT operations challenges, as Tiger Bridge represents a universal answer.


XenData
Launched more than two decades ago, on 9/11/2001, what a date, in the UK by Philip Storey, CEO, and Mark Broatbent, CTO, XenData plays in the active archive data storage category. The mission is to offer a scalable secondary storage platform dedicated to media and entertainment, but also to similar needs in other segments. The original idea was simple, born from the need to let archive, thus tape-like, applications write to disk. Self-funded, the original team designed a solution that is today widely adopted, with 1,500+ installations worldwide. The team has found its market: the solution fits media and entertainment needs, a segment with a huge number of users of removable media like tape, but also archive lovers. The company also understood that success comes through key partnerships with players already deployed, used and trusted, which together validate a global solution for end users.

So the concept is to glue an LTO tape library and a disk array, both connected to a server, and globally this stack operates as an archive destination. But active archive really means that no external help is needed to access and retrieve data; operations are seamless and available to any user via simple integrated access methods. This is why we see network shares or drive letters on the Windows server. The other key aspect is that the server coupled with disk acts as a cache for ingest and retrieval operations, making things more fluid and faster. And obviously, frequently accessed files are kept longer in the disk zone before reaching tape. This is covered by the X-Series product line.
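As a purely conceptual sketch of a disk cache sitting in front of tape (not XenData's algorithm; the eviction policy and capacity are assumptions), the idea can be summarized as LRU-style eviction of cold files to tape.

```python
from collections import OrderedDict

CACHE_CAPACITY = 5  # number of files kept on disk in this toy example

class DiskCacheBeforeTape:
    """Files are ingested to disk first; the least recently used files migrate to tape."""
    def __init__(self):
        self.disk = OrderedDict()   # filename -> size, ordered by recency
        self.tape = set()

    def ingest(self, name: str, size: int) -> None:
        self.disk[name] = size
        self.disk.move_to_end(name)
        while len(self.disk) > CACHE_CAPACITY:
            cold, _ = self.disk.popitem(last=False)
            self.tape.add(cold)      # the coldest file is flushed to tape

    def read(self, name: str) -> str:
        if name in self.disk:
            self.disk.move_to_end(name)   # recently read files stay on disk longer
            return "served from disk cache"
        if name in self.tape:
            return "recalled from tape"   # seamless for the user, just slower
        raise FileNotFoundError(name)
```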

Starting as a single node, the configuration can be extended to a multi-node model connected to external disk arrays and tape libraries, plus of course cloud. The team has validated Wasabi, Backblaze, Seagate Lyve and two giants, obviously Azure and AWS.

Beyond this device-based solution, the team has developed a pure software product named Cloud File Gateway to sync archiving sites or XenData instances globally.

The most recent product iteration is the E-Series, an object storage offering. Starting at 280TB and able to grow up to 1.12PB with 4 nodes, the solution is essentially an S3 storage entity, confirming what we see on the market: object storage has moved from a truly distinct architecture to just an interface, in favor of users having more flexible choices. The same file-based content can be accessed via the file system or HTTP methods.
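As a hedged illustration of that dual access model, the same asset could be fetched over S3 or read through a network share; the endpoint, credentials, bucket and UNC path below are placeholders, and this is generic S3 client code rather than anything XenData-specific.

```python
import boto3

# The same archived asset fetched over the S3/HTTP interface (placeholder endpoint).
s3 = boto3.client(
    "s3",
    endpoint_url="https://eseries.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
obj = s3.get_object(Bucket="archive", Key="projects/clip-0042.mxf")
payload = obj["Body"].read()

# ...or read through the file interface exposed as a network share (placeholder UNC path).
with open(r"\\eseries\archive\projects\clip-0042.mxf", "rb") as f:
    same_payload = f.read()
```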

The team offered a preview of its media browser, coming soon, which allows rapid access to media content in any resolution and complements partners' solutions.

This XenData approach offers a really interesting model, integrating multiple storage technologies coupled with the cloud, with seamless tiering and migration between all these levels.


ZettaScale Technology
Founded in 2022 as a spinout from Adlink Technology, ZettaScale is a pure infrastructure software company developing middleware to set new standards in communication, compute and storage for humans and machines, anywhere and at any scale.

The challenge resides in the mix of very dispersed entities that need to collaborate in today's complex world. To enable this, it is paramount to consider a specific, dedicated exchange protocol, playing the role IP had, and still has, in the Internet's birth, design, growth and ubiquitous adoption. Again, this need appears in IoT, edge, automotive, robotics and other advanced devices that need to communicate, exchange data and potentially process it.

To be precise on the automotive aspect, the complexity comes from software integration, with huge immediate challenges in processing, exchanging and storing a fast-growing data volume. The other fundamental design requirement is to support the dispersed and decentralized nature of the environments to cover. This is a big change from the classic centrally managed approach, which is no longer aligned with the world we live in. Today we still rely on old protocols with wireless and scalability difficulties, plus the energy dimension.

The answer is Zenoh, a protocol that provides a series of key characteristics and properties such as the unification of data in motion, data at rest and computation, from very small entities like microcontrollers up to data centers. It is an official standard protocol, with ISO 26262 ASIL D certification pending. The other core element is location independence, supporting distributed queries. Imagine moving vehicles in cities: the data exchange must be fast, resilient and accurate, coming from any vehicles interacting with each other, and after a car crash some of them could disappear and become unreachable. Zenoh was built for that and represents the state of the art in the domain. It is written in Rust and offers native libraries and API bindings for a wide variety of languages and network technologies, with Unix sockets, shared memory, TCP/IP and even Bluetooth or serial. It runs on almost everything, i.e. Linux, Windows, macOS or QNX, over any topology. Zenoh promotes a universal model with publish/subscribe, remote computation and storage backed by a file system, MinIO, AWS S3, RocksDB or InfluxDB.
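To make the publish/subscribe side more concrete, here is a minimal sketch assuming the eclipse-zenoh Python bindings; the key expressions and payloads are made up, and the exact call signatures may differ slightly between Zenoh versions.

```python
import time
import zenoh

# Open a Zenoh session with the default configuration (peer discovery, etc.).
session = zenoh.open(zenoh.Config())

# A subscriber interested in all telemetry published under the "vehicle" prefix.
def on_sample(sample):
    print(f"received {sample.payload} on {sample.key_expr}")

sub = session.declare_subscriber("vehicle/**", on_sample)

# A publisher on one vehicle pushing a telemetry value.
pub = session.declare_publisher("vehicle/42/telemetry/speed")
pub.put("87 km/h")

time.sleep(1)   # give the sample time to arrive in this toy example
session.close()
```

The same key-expression model extends to queries and storage back ends, which is how the location independence mentioned above shows up in practice.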

ZettaScale recently unveiled its Zenoh platform, which should significantly boost the adoption and deployment of Zenoh-based projects in various domains: robotics, submarine vessels, heavy mining, drones, logistics and of course automotive, and we have already seen some very promising demonstrations in several of these areas. It also underpins what is called the Software-Defined Vehicle, serving as an open communication backbone. Obviously, plenty of OEMs are interested in this technology, which represents a big leap for the category.
