February 15, 2021

Fujifilm iterates on Object Archive

Fujifilm, a key player in secondary storage, introduced Object Archive (OA) version 2 during the recent IT Press Tour. Object Archive is the name used in North America; elsewhere the product is promoted as Software-Defined Tape. This difference could explain some of the difficulties prospects and partners have in approaching the solution, as they first need to sort this out.

Let's refresh our readers on Object Archive. In just a few words, it is an on-premises object storage solution exposing S3, connected to tape libraries, addressing long-term archiving needs by leveraging tape as a passive medium.

OA operates as a gateway, cache, S3 and tape server, and can be compared with what VTLs did in the past in the industry. VTL is dead, or almost, as it was a resistance to the wave of the disk-based deduplication model for backup. VTL perfectly illustrated the shift tape needed to make toward archive and, above all, deep archive.

OA supports tape libraries equipped with LTO-7, LTO-8 and IBM TS1160 drives, with LTO-9 coming soon.

As 1 copy of data is not enough, even if tape is a reliable medium, 2 copies are the minimum and are included in the Fujifilm OA subscription. We expect the tape server engine to offer RAIT or ECoT (Erasure Coding on Tape) to reduce tape consumption, increase efficiency and tolerate more errors. Of course this requires at least 3 tape drives.
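To make the parity idea concrete, here is a toy sketch (our illustration, not Fujifilm's implementation) of how a RAIT-style scheme would spread protection across at least 3 drives: two data stripes plus one XOR parity stripe survive the loss of any single tape at 1.5x overhead, instead of the 2x of a full second copy.

```python
# Toy RAIT-style XOR parity across three "tapes" (illustration only).
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

tape1 = b"OBJECT-A"                 # data stripe on drive 1
tape2 = b"OBJECT-B"                 # data stripe on drive 2
parity = xor_bytes(tape1, tape2)    # parity stripe written to drive 3

# Simulate losing tape2: rebuild it from tape1 and the parity tape.
rebuilt = xor_bytes(tape1, parity)
assert rebuilt == tape2             # any single lost tape is recoverable
```

Real ECoT schemes use stronger codes than plain XOR, but the capacity trade-off is the same: parity stripes cost a fraction of a tape rather than a whole second copy.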

The product also introduces a new tape format named Open Tape Format, aka OTFormat, to better support tape and align objects to the media.

By default, the air gap is obvious with such a model, as tapes can be physically separated and never touched by propagating ransomware.

This version 2 illustrates that market adoption of OA requires strong partnerships, especially for S3-to-S3-to-tape, and Fujifilm picks leaders like Cloudian, Caringo or NetApp StorageGRID. These players represent strong vendors shown in the latest Coldago Map 2020 for Object Storage. At the same time, the team recognizes it also needs to be compatible and integrated with horizontal data management, and it picked Starfish and Tiger Technology. I'm surprised not to see Hammerspace, Komprise, PoINT Software & Systems, StrongBox Data, Data Dynamics, QStar or Versity. Especially StrongBox Data and QStar, as they're listed on the Fujifilm Data Management Solutions page for North America, as you can see in the image below.


This is strange, but I understand it's a work in progress, at least we hope so. These partnerships are key to penetrating the market, as many of these products are already deployed and OA must be validated with all of them.

OA targets vertical use cases essentially around deep archive and/or active archive. The company keeps the same pricing model introduced with version 1.

Fujifilm confirms that object storage, especially its S3 access method, occupies a major role today, with tape as the long-term storage residence. We'll see how the market adopts this solution.



February 10, 2021

HYCU jumps into Office 365 data protection

HYCU, the emerging leader in modern backup, once again chose The IT Press Tour to announce a new product iteration, key for the company and for enterprises globally.

The company is well known for its rapid growth, present in 70+ countries with 320+ partners, fueled by its multi-cloud SaaS backup and recovery solution and, historically, by its Nutanix data protection product. It is adopted by more than 2,000 customers, runs on-premises or in the cloud, and is also used by customers to control their journey to the cloud.

The cornerstone product is HYCU Protégé, a complete multi-cloud data management offering supported in various environments with key attributes: on-premises, VMware, Nutanix, AWS, GCP and Azure support; agentless; application-aware with databases such as Oracle, SQL Server and SAP HANA; multi-tenant; dynamic auto-scaling; and a real as-a-service philosophy.

The new product is HYCU Protégé for O365, protecting the Office 365 family with Outlook, OneDrive, Teams, SharePoint and OneNote, plus Office 365 itself with classic Excel, Word... This is a first differentiator against the competition, promoted as Total 360˚ Protection of O365, and everything is transparent for users.

The protection leverages the journal feature to provide a near-CDP capability and offers a high level of granularity. It adds an advanced search feature to navigate within protected data. Data is encrypted in transit and at rest with industry-standard methods, and the target is also aligned with compliance regulations.

In a second part, the HYCU team shared some updates on HYCU SAP HANA DR for GCP, with new capabilities around scheduling, snapshots and recovery.

On the Nutanix side, and especially related to Mine, HYCU supports object storage, erasure coding and deduplicated storage targets.

Another key topic for the company is the ransomware threat, addressed with WORM-aware targets based on S3, for instance MinIO, integrated by Nutanix as the Objects data service. This protects backup images against errors or malicious actions and supports legal holds and, as mentioned, encryption.

And to conclude, the team reaffirmed its support of bare-metal or physical servers. This is a key element for being considered an enterprise data protection solution.



February 2, 2021

OwnBackup joins the club

OwnBackup, the leader in Salesforce backup, just raised a new round of $167.5M - Series D - for a total of $267.5M. It represents a huge achievement for the company, just a few months after its last round of $50M in July 2020.

With a valuation of $1.4B, this round confirms the unique positioning and role of OwnBackup in the industry and invites the company into the Storage Unicorn Club published twice a year by Coldago Research. It is joining this group and will be listed in the coming report next June.

The interesting part is that Salesforce continues to invest in the company, and it would make sense for them to acquire it and end the game, as the others appear to be anecdotes. At the same time, Sapphire Ventures is investing as well.

Now the company has to continue its development effort as users expect the solution to cover other SaaS enterprise applications.

We have met OwnBackup twice during The IT Press Tour and were impressed by the vision, strategy, product and direction.




January 28, 2021

Datameer partners with Google Cloud

Datameer, a historical leader in data manipulation, continues its product development and market penetration, this time with an official partnership with Google Cloud.

Enterprises have had difficulties leveraging the cloud, especially with complex and large data sets. They require validated tools to rapidly enable this new cloud computing model and maintain business attractiveness. Google, as a cloud provider, wishes to swallow all on-premises data, and the choice of Datameer was and is an obvious pick, the company being a recognized key player in the domain. The firm was founded more than 10 years ago with the goal of facilitating big data, and in particular Hadoop, integration, analysis and visualization.

This requires real expertise in migrating enterprise data warehouses to feed Google Dataproc and BigQuery, and it's delivered by Datameer Spectrum. The product supports AWS and Azure as well, spanning all environments for enterprises, especially large ones.

So what is Datameer Spectrum in detail?

  1. It's a super ETL data pipelining tool that couples various data sources and destinations, supporting multiple formats across hybrid clouds. It is as simple as using a spreadsheet, known by everyone. Data is not limited to structured formats; Spectrum supports semi-structured and unstructured ones as well.
  2. It operates under very secure methods, with strong authentication, authorization, support for SAML and LDAP/Active Directory, and also encryption.
  3. And finally, it delivers its functions very fast.
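As a feel for what spreadsheet-simple pipelining means in point 1, here is a hypothetical mini-pipeline in plain Python (our illustration, not Datameer's actual API): join two small sources and produce one blended result, the kind of operation Spectrum performs at warehouse scale.

```python
# Hypothetical mini-pipeline: join a "customers" source with an "orders"
# source and aggregate, spreadsheet-style (illustration only).
customers = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]
orders = [{"cust": 1, "amount": 120.0}, {"cust": 1, "amount": 80.0},
          {"cust": 2, "amount": 40.0}]

by_id = {c["id"]: c["name"] for c in customers}   # lookup, like VLOOKUP
totals: dict = {}
for o in orders:                                   # join + group-by + sum
    name = by_id[o["cust"]]
    totals[name] = totals.get(name, 0.0) + o["amount"]

print(totals)  # {'Acme': 200.0, 'Globex': 40.0}
```

The real product does this declaratively over many sources and formats; the point is only that the mental model stays as simple as a spreadsheet formula.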

Companies choosing Datameer Spectrum gain agility, obtaining faster results in their decision processes by aggregating all data in one large data warehouse, hosted in this case within Google Cloud.



January 20, 2021

The 38th IT Press Tour is coming very soon now

The coming 38th IT Press Tour will again be delivered as a virtual edition, for obvious reasons. As companies have lost expo and conference opportunities to reach the press, the tour confirms once again its unique role in the industry, building a unique bridge between companies and the press.

The edition organized next week will be the perfect opportunity to meet:
I invite you to follow us on Twitter with #ITPT and @ITPressTour, my Twitter handle @CDP_FST and the journalists' respective handles. It will rock again.

January 19, 2021

OVHcloud believes in tape

OVHcloud, the European cloud leader, just announced a new Storage-as-a-Service offering based on IBM tape coupled with Atempo data management software. The tape chosen is the IBM Enterprise 3592, managed with Atempo Miria. On the front end, OVHcloud will integrate a 6+3 erasure coding scheme. I don't know who wrote the press release, but they use the term replication when they speak about EC. Let me clarify things for the OVHcloud people: replication creates multiple replicas, i.e. identical blocks in multiple locations, meaning you end up with 2x or 3x the data; erasure coding splits data and adds parities, but in that case there is only one copy, for a final overhead ratio of 1.2, 1.3, 1.4...
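The overhead difference is easy to put in numbers. A quick sketch (our own arithmetic) comparing the raw-to-usable ratio of replication with that of erasure coding, including the 6+3 layout from the announcement:

```python
# Raw capacity consumed per byte of usable data, for each scheme.
def replication_ratio(copies: int) -> float:
    """N full copies consume N times the usable capacity."""
    return float(copies)

def erasure_coding_ratio(data_parts: int, parity_parts: int) -> float:
    """EC stores data once, split into data_parts, plus parity_parts."""
    return (data_parts + parity_parts) / data_parts

print(replication_ratio(3))         # 3.0 -> triple replication stores 3x
print(erasure_coding_ratio(6, 3))   # 1.5 -> the 6+3 scheme, any 3 losses tolerated
print(erasure_coding_ratio(10, 2))  # 1.2 -> a wider 10+2 layout
```

So a 6+3 scheme tolerates three simultaneous losses for half the footprint of 3x replication, which is exactly why confusing the two terms matters.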

The second point is related to the term used. This announcement confirms the role of tape, but I'm surprised to read Storage- instead of Tape-as-a-Service. Probably because OVHcloud is not a storage specialist at all but more of a hosting company, and it continues to confuse the market. With ransomware being a dominant threat, tape and air gap really make sense.

Tape continues to evolve, even if the latest LTO-9 characteristics disappointed the market, but recent Fujifilm and IBM research brings new optimism for large tape capacities, with a prototype of 580TB per cartridge.

It is true that tape has a very low TCO as a passive medium, but users also have to consider time to access data, also known as the Recovery Time Objective aka RTO, and the ratio of drives to tapes, to avoid long waits, as tape drives are the critical resource. When you compare, include the tape library and tape drives, not only tapes, add the redundancy factor, and compare against a disk array as a whole, with the energy dimension over 5 or 10 years. On the other hand, it would be crazy to keep an archive disk array online for 10 years without any energy control mechanism. Tape has morphed into the archive, even deep archive, medium of choice, and coupled with a catalog or content index its adoption becomes simpler.
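To see why the drive-to-tape ratio dominates RTO, here is a back-of-the-envelope sketch. All figures are our hypothetical assumptions (a 120-second mount, LTO-8-class 12TB tapes read at 300 MB/s), not OVHcloud's numbers:

```python
# Rough recall time for a large restore: tapes are read in waves,
# one wave per set of available drives (hypothetical figures).
def recall_hours(tapes: int, drives: int, tb_per_tape: float,
                 mb_per_s: float, mount_s: float = 120.0) -> float:
    per_tape_s = mount_s + (tb_per_tape * 1e6) / mb_per_s  # mount + full read
    waves = -(-tapes // drives)        # ceiling division: mount waves needed
    return waves * per_tape_s / 3600.0

# Recall 40 tapes of 12 TB at 300 MB/s behind only 4 drives:
print(round(recall_hours(40, 4, 12.0, 300.0), 1))  # ~111.4 hours
# Double the drives and the same recall roughly halves:
print(round(recall_hours(40, 8, 12.0, 300.0), 1))  # ~55.7 hours
```

The tape cartridges themselves are cheap; the drives are the bottleneck, which is why any honest TCO comparison has to price the library and drives, not just the media.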

This is interesting, as Scaleway, another French cloud provider, promotes a disk-based deep archive system without any tape. This service spins down and stops drives to align energy savings with tape. Good, the battle continues... And we await archive on flash, as the footprint is fantastic and restart is super fast; we just wait for the cost to come down.


December 28, 2020

Recap of 37th IT Press Tour

This report was first published on StorageNewsletter on December 23.

The 37th edition of The IT Press Tour organized virtually was the opportunity for corporate and strategy updates and also several product launches from Atempo, Data Dynamics, DDN, iXsystems, Kaseya, MinIO, Robin.IO, SoftIron, StorCentric and StorPool.

Being a reference for multiple decades while continuing to innovate is not easy. For a European company, it's probably even more complex, but the firm belongs to the small group of players able to keep unveiling new features and product capabilities, illustrated by good market adoption and several key partnerships. This European identity is paramount in the current climate of compliance, data regulations and directive pressure. Atempo actively promotes 3 products: Tina as the server and enterprise data protection solution, Lina for end-user machines and endpoints, and finally Miria for large unstructured data environments.

The first refresh is related to Tina, which receives a new intuitive UI common to Lina and Miria. The product now protects OpenStack VMs in an agentless mode and Office 365, and adds backup to the cloud with S3, before soon coupling its dedupe engine with D2D2C. Tina will receive support for Nutanix AHV, Huawei FusionCompute and KVM, with Salesforce and other SaaS protection. We hope Atempo won't choose a toy for Salesforce if the direction of embedding partner technology is adopted. Nutanix signed an OEM deal with HYCU.

Lina 2020 also continues to evolve as the workstation, desktop and laptop protection choice for many end-users. With CDP, encryption and dedupe, plus a flexible deployment model such as an on-premises Lina server, the product can protect tens of thousands of machines. Well aligned with MSP needs, the product is now available on the OVHcloud marketplace, with billing starting at €8/50GB/month (without VAT).

The third product of the family, Miria, also received a refresh, supporting Lustre with FastScan and True Image Recovery. Miria for Migration is already adopted by several partners as their preferred solution; this is the case for DDN, Huawei, Nutanix and Qumulo. Cloud is a key topic for Atempo and, like Tina, Miria supports S3 in any combination for migration: local to cloud, cloud to cloud, cloud to local, and also backup or archive of objects to tape. GCP and Azure will be available in 1Q21, and we continue to wonder why Swift is needed as OpenStack drastically reduces its scope and market presence. The hype is over, guys. In 2021, Miria should receive an analytics module, DR for large environments and new HSM capabilities.



Unstructured data creates nightmares for administrators flooded by the deluge of data. This is a real day-to-day challenge for storage administrators, as they need to continue to serve users with optimized storage and data services. This effort can be summarized with several examples such as system refreshes, data replication for distribution, collaboration or protection, and optimization of cost and storage with effective data evacuation to the right storage entities. StorageX, now in its 8.3 release, has been a reference in that domain for more than a decade, extending to S3 storage, represented by on-premises or cloud object storage but also Azure Blob, and leveraging Elasticsearch to boost file and metadata search and indexing.

The important notion promoted by Data Dynamics is the platform. In fact, this is the ultimate step in the classic Tool – Product – Platform adoption path, where administrators first pick a tool to address a need limited in time and task coverage. The direct cost is attractive, but the need to multiply tools creates complexity and therefore increases human and technical costs exponentially. This transient usage evolves into product adoption, used longer but still for limited operations. The ultimate level is the platform, with a resident model used by many people for many usages all the time, delivering value over time and through the feature list it provides. StorageX is exactly this: a resident solution used every day for different usages, critical to keeping storage and data management services up and running to support business operations.

Data Dynamics also strengthened 2 key partnerships, with NetApp and Lenovo, which extend StorageX's presence and market penetration. They also validate the product, features and use cases with many deployments. The company started an initiative with hyperscalers that demonstrates the scalability of the solution in wide and large environments. This success over the past few years is illustrated by regular headcount increases and double-digit revenue growth.



DDN is the largest private storage company, a status confirmed and made more visible by its last few acquisitions such as Nexenta, IntelliFlash and Tintri. The company has shipped 10+EB since its inception and generates roughly $600 million annually.

The effort started a few years ago to promote Lustre-based storage infrastructure is gaining maturity and traction, as ExaScaler now represents 4x the sales of DDN's At Scale business vs. the rest of the HPC product line. It means the firm is replacing Spectrum Scale-based systems and deploying the 5th generation of ExaScaler at a rapid pace. The 5.2 release introduces the following features: global snapshots, always a challenge for distributed environments; NAS and S3 support in addition to the POSIX client; mixed storage with intelligent data placement; multi-tenancy; and security features.

As HPC and enterprise storage needs converge, DDN continues to enhance the integration of its enterprise product line under the Tintri brand, covering VMstore and IntelliFlash, fueled by Nexenta software for the latter.

VMstore demonstrated some great progress in 2020 with new features, a new appliance and the support of SQL databases. Seen as a pioneer of VMware storage with its dedicated NFS datastore product a few years ago, Tintri engineering is trying to deliver a similar solution for SQL Server databases, and soon Oracle, with NVMe support.

The second key product of the Tintri business unit is IntelliFlash, positioned as a scalable unified storage product exposing block, file and object, coupling software from IntelliFlash and Nexenta. The 4.0 release will soon see the full IntelliFlash stack as a real SDS instance.

With ExaScaler, Tintri, Nexenta and IntelliFlash, DDN is a leader in file storage, confirmed by the Coldago Research Map 2020 for File Storage.



The company behind TrueNAS accelerates its product development in several areas. Introduced in June 2019, TrueCommand has been extended with the 2.0 release, a true cloud flavor. Named TrueCommand Cloud, it can now be deployed on-premises as a VM or a Docker container, or even on AWS, and it controls any kind of TrueNAS environment – Core, Enterprise or Scale – of any size. It offers security features that allow real multi-tenancy aligned with MSPs' needs.

TrueNAS is now at the 12.0 level, coupled with OpenZFS 2.0 for 30% more performance. The M60 is the new appliance, presenting the fastest ZFS storage with 20GB/s and 1 million IO/s in just one chassis. A new SOHO server appears as well, the Mini X+, showing a wide product offering from small business entities to very large corporations.

The team also insisted on a strategic product, TrueNAS Scale, unveiled recently. The solution invites iXsystems to play with the big guys, at scale in data centers, while still relying on open source. This new software iteration confirms the company's SDS strategy on any hardware for any use case, from really small ones to large consolidated usages on the same platform, what they call Storage Freedom. A unified approach with block, file and object interfaces, this open HCI model relies on a scale-out ZFS design and various data redundancy schemes, virtualization with KVM, and containers with Docker and Kubernetes. In the next few weeks, TrueNAS Scale will run in AWS on EC2 or VMs, confirming the hybrid direction.

Business is also good, with 25% annual growth and 1,000+ new customers per year, supported by 200+ channel partners worldwide. 75% of the business comes from North America, 15% from Western Europe, 6% from APAC, 2% from Africa and 2% from Latam, delivered essentially with the M and X Series and TrueNAS Enterprise. The second revenue contributor is Germany.



Targeting SMBs and MSPs, Kaseya continues to grow fast, even during the Covid-19 period. Drivers are multiple, but the digital transformation of SMBs is a real catalyst, sustained by MSPs. IT Complete, the Kaseya platform, sees more product and service integrations, as the company acquired 8 entities during the last 5 years.

On the business side, Kaseya generated roughly $300 million in 2020 with 35,000 clients worldwide, adding 1,200 new ones every quarter. This performance is remarkable with GDP down by at least 5% but some IT spending up by 3%. In these uncertain times, security requirements became certain, and Kaseya saw natural growth in its service utilization. At the same time, the team added compliance capabilities that strengthened product adoption.

As pressure increased over the last few months, SMBs chose a consolidated IT management approach with IT Complete, avoiding distinct product selection and the implicit complexity that comes with it. The platform embeds various services and especially relies on Unitrends for unified backup and on Spanning for data management capabilities for Office 365 and more generic data protection needs. Fred Voccola, CEO of Kaseya, insisted on the integration of the various services and products coming from acquisitions into IT Complete, which finally masks complexity, delivering more value from one central unified console and dashboard. The key element of this integration is the Kaseya Integration Hub, which helps translate data formats, types, workflows… and connects each part of the platform. It also boosts each integration: instead of each product being integrated with every other, exploding the number of possible combinations, each integrates with the Hub, the common element across all services. As of today, IT Complete has 78 workflow integrations, growing by 6 to 8 each quarter. Integration decreases risks and costs and improves IT efficiency, freeing time for IT people to dedicate to new projects.
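The hub argument is simple combinatorics: n products integrated pairwise need n(n-1)/2 connectors, while a central hub needs only n, one per product. A quick illustration:

```python
# Integration count: point-to-point mesh vs a central hub.
def pairwise(n: int) -> int:
    """Every product integrated directly with every other."""
    return n * (n - 1) // 2

def via_hub(n: int) -> int:
    """One connector per product, all through the hub."""
    return n

for n in (5, 10, 20):
    print(f"{n} products: mesh={pairwise(n)}, hub={via_hub(n)}")
# 5 products: mesh=10, hub=5
# 10 products: mesh=45, hub=10
# 20 products: mesh=190, hub=20
```

With 8 acquisitions in 5 years, keeping the connector count linear rather than quadratic is exactly what makes the integration effort sustainable.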



Founded in 2014, MinIO has gained the credibility and maturity to appear as a leader in object storage, we should say S3 storage.

For a few quarters, the company has promoted itself as the standard storage for Kubernetes and especially as a key companion for VMware via its vSAN Data Persistence platform. The solution is positioned as the storage component of a hybrid cloud strategy, supported in all kinds of deployments.

Free by nature as open source software, MinIO leverages Subnet to commercialize, contract, bill and support its storage service. At $1.2 million per year, clients receive a minimum of 10PB of usage at the standard level or 5PB at the enterprise level, the latter being more comprehensive.
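Translated into unit pricing, and taking the figures above at face value, this works out as follows (our arithmetic, not an official MinIO price list):

```python
# Effective $/TB/month implied by the annual Subnet figures quoted above.
def usd_per_tb_month(annual_usd: float, pb: float) -> float:
    """Annual price divided over 12 months and the PB entitlement (1 PB = 1000 TB)."""
    return annual_usd / 12 / (pb * 1000)

print(usd_per_tb_month(1_200_000, 10))  # standard tier: 10.0 $/TB/month
print(usd_per_tb_month(1_200_000, 5))   # enterprise tier: 20.0 $/TB/month
```

So the enterprise level costs twice as much per usable TB, the premium paying for the more comprehensive support coverage.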

The team has also developed a new UI, named MinIO Console, that simplifies all associated tasks. On the resiliency side, the engine delivers mechanisms both in terms of data protection, with replication via server-side bucket copy and erasure coding, and with WORM and object locking; the team even recently added a tiering mode to evacuate large or inactive objects. Users can build a hierarchy of clusters or consider the public cloud as the target.

Today MinIO has more than 15 million instances running, being by far the #1 S3 storage engine on the planet in all flavors, i.e. on-premises and in the AWS, Azure or GCP clouds. This is accomplished through tons of integrations such as Cisco, Datera, Humio, iXsystems, MapR (now owned by HPE), McKesson, Nutanix, Pavilion Data Systems, Pivotal, Portworx, Qumulo, Robin.IO, Splunk, Ugloo and VMware, to name a few. The MinIO tsunami is real and has a significant impact on the industry.

The company is attractive both as a product and as a firm, and we estimate at least a multiple of 20 over the VC money, which means roughly $500 million to acquire it. The candidate list is now pretty long, and the founding team and its board of directors have several choices among Dell, HPE, IBM, Red Hat, VMware and even the cloud giants. We'll see.


Kubernetes storage is hot, for sure. Robin.IO belongs to the small group of players who understand the storage and data management aspects of it. The team has developed a rich data services layer that supports any flavor of Kubernetes – K8s, OpenShift, Anthos, IBM, GKE, AKS and EKS – deployed on-premises or in the cloud.

With recent moves like DataCore investing in MayaData and also leveraging its technology, Portworx acquired by Pure Storage and Kasten by Veeam, Robin must accelerate and above all gain visibility, as the company is less visible than the product. It confirms the 2 main angles of development: data management, especially backup, and the storage service with the persistent volume requirement for stateful applications.

This combined approach represents a key differentiator vs. the competition, and thus Robin.IO recently unveiled the Express edition, a full-featured Cloud Native Storage offering, only limited to 5 nodes and 5TB globally. It is the promotion vehicle to invite devops communities to test, try and then adopt the product with the Enterprise edition, which is similar but without any limitations.

One of the values of the firm's storage service is performance, well aligned with bare-metal levels, immediately addressing the potential degradation of such a complex stack. Robin.IO charges for the product per node-hour of usage, aligned with the distributed nature of the cloud, with a reasonable time granularity. The second angle is the presence on the Red Hat Marketplace, to be associated with OpenShift, a best-selling and highly visible open source container application platform also based on Kubernetes. Last point: Robin.IO is obviously on the radar of larger vendors.



Seen as a real ambassador of Ceph, SoftIron made significant progress in 2020. First, it raised a new round of $34 million – Series B – that has accelerated the development of its product lines with dedicated, optimized open source-based storage appliances.

The company recently opened 2 new offices, in Berlin, Germany, and San Diego, CA, and doubled its headcount, with the capability to produce systems in different regions close to deployment sites, shortening supply chains with more predictability.

In fact, the team is expert in hardware design and open source software, especially Ceph. The company fully embraces the SDS model but runs it on optimized COTS hardware for specific use cases. One of these optimizations is energy, as data center energy bills represent a significant line item. To illustrate, compare a 1U HyperDrive with 120TB that consumes 146W with a Supermicro 1U chassis with 144TB at 800W. And do the math for 100 appliances running 24/7 over 5 years.
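Let's actually do that math. A short sketch using the wattage figures above, with an assumed illustrative electricity price of $0.12/kWh (our assumption, not SoftIron's):

```python
# Energy consumed by a fleet of appliances running 24/7 over several years.
def five_year_kwh(watts: float, units: int = 100, years: int = 5) -> float:
    """kWh = kW * hours, for `units` appliances over `years` years."""
    return watts / 1000 * 24 * 365 * years * units

RATE = 0.12  # assumed $/kWh, for illustration only
for name, w in [("HyperDrive (146 W)", 146), ("Supermicro (800 W)", 800)]:
    kwh = five_year_kwh(w)
    print(f"{name}: {kwh:,.0f} kWh, ~${kwh * RATE:,.0f}")
```

With these assumptions, the 100-appliance fleet draws roughly 640 MWh versus 3.5 GWh over 5 years, a difference of several hundred thousand dollars on the electricity bill alone.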

Among various key recruitments, SoftIron hired Craig Chadwell as product strategy chief to boost the product roadmap and development directions. The team understands that adoption needs an ecosystem, and partner software validation is a must. Among these partners, we see iRODS, an open source data management platform, and commercial software with Veeam and its object feature validation. Whatever the organization, we find, most of the time, commercial software for backup and archiving functions. To extend its market footprint, the company has decided to address new needs around specific applications where optimized Ceph is a good fit. Among them are MAM, global collaboration, AI/BI ETL, commercial and scientific HPC, and OLTP.

Discussing with Schalk Van der Merwe, CTO of The Hut Group, a giant e-commerce service, we learned that all its critical business activity relies on 3 replicated OpenStack zones fueled by Ceph for the storage layer.



StorCentric is a unique company, with several brands in its catalog: Drobo, Nexsan, Retrospect, Vexata and a recent one. Mihir Shah, CEO, and his team continue their external growth, targeting bargains. It was the case for Vexata and Nexsan, and more recently with the formerly famous company Violin Memory (sorry, Systems), as the organization has had so many lives. Historically positioned as a high-end flash array, Violin now occupies the mid-range flash segment alongside Nexsan, its capacity-oriented companion. In terms of development, NVMe over TCP should arrive in the coming months to facilitate adoption in already deployed TCP networks.

The other direction taken seriously by StorCentric is confirmed by 2 realities: first the SDS approach and, more generally, the software model with data management, illustrated today by Retrospect and the Data Mobility Suite (DMS). The latter is an important piece of the company strategy, as a feed layer across multiple StorCentric products but also a way to glue and transfer data from competitors' offerings.

Except for the Unity and Drobo models, which are unified storage solutions, StorCentric is focused on block storage. The company is monitoring the market to offer a data analytics service for storage and potentially an object storage service, on top of an existing product or as an independent one. Some ideas exist on the market with open source S3 engines such as MinIO, already embedded and offered by several players mentioned above.

We smell some news in the next few quarters, but it could even arrive faster, like the Vexata deal, and surprise the market, like Violin. Keep your eyes open.


StorPool, the European leader in block SDS, continues to penetrate the cloud provider segment with a rich approach. We first wrote about the company in 2014, after discovering it in 2013.

StorPool is a real SDS if you consider this definition: "SDS transforms a rack of (classic and standard) servers (with internal disks) into a large storage farm. You can find block, file or object, as it is essentially how the storage entity is exposed outside." In other words, take 3 servers, install Linux and the StorPool software, and you get a high-performance, resilient storage array for primary storage needs, equipped with HDDs, SSDs, or a mix.

In terms of feature set: StorPool leverages its own efficient on-disk format with copy-on-write; logical volumes are exposed via iSCSI or a dedicated Linux client; controllers can be active/active, and volumes can be shared across multiple application hosts; volumes can be attached to bare-metal instances or to KVM, Kubernetes, VMware and Hyper-V; protection is delivered with 3x replication (no plan for erasure coding yet), snapshots and asynchronous remote copy; "run from backup" and a backup streaming mode are included; a full REST API is available, along with an Ansible integration to deploy either a decoupled server-storage topology or a converged mode; and finally there is a comprehensive, intuitive GUI.

The company targets performance environments, delivering latency below 100μs and 1 million IO/s per server, with 250,000 IO/s per CPU core. A true block SDS, a bit confidential, but already chosen by several cloud providers as their preferred primary storage model.


