Friday, April 14, 2023

Recap of the 49th edition of The IT Press Tour

Initially posted on StorageNewsletter 12/4/2023

The 49th edition of The IT Press Tour took place recently in Israel and was an opportunity to meet and visit a few innovators that lead their market segments or shake established positions with new approaches in cloud, IT infrastructure, data management and storage: CTERA, Equalum, Finout, Pliops, Rookout, StorONE, Treeverse and UnifabriX.

CTERA
Among the leaders in cloud file storage, CTERA is entering a new era and took advantage of this session to unveil two major announcements.

The first one is related to promotions for several key executives. Oded Nagel is the new CEO; he joined the team in the early days of the company and was chief strategy officer for 5 years. Liran Eshel moves to executive chairman, Michael Amselem becomes CRO and Saimon Michelson is elevated to VP of alliances. During this session, we measured all the progress made by the team, with a significant increase in market footprint illustrated by large deals in the US and globally. These changes arrive after a successful FY22 with 38% revenue growth and 150PB now under management.

The session started with Liran Eshel, who spoke about data democracy, a new level to reach that goes beyond data ownership, which he considers necessary but not sufficient.

The team also shared product news and reiterated some recent features around ransomware protection, cloud storage routing, an impressive media use case with Publicis, and unified file and object, or UFO. At the same time, new partners have joined the ecosystem, such as Hitachi Vantara, Netwrix and Aparavi.

The ransomware module embeds AI-based technology developed at CTERA to detect and block attacks in under 30 seconds. The idea is to identify behavior anomalies and then decide which action to take; this approach doesn't rely on a signature database, which is considered insufficient against today's threats.
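To illustrate the general behavior-anomaly idea, here is a minimal sketch in Python that flags a user whose file-write rate suddenly far exceeds a learned baseline within a short window. This is an illustration only, not CTERA's actual engine; the window size, threshold and baseline logic are assumptions.

```python
# Minimal sketch of behavior-based ransomware detection: flag a user whose
# file-write rate suddenly far exceeds its historical baseline.
# Illustrative only, not CTERA's actual detection engine.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 30          # detection window, echoing the sub-30s claim
THRESHOLD_MULTIPLIER = 10    # hypothetical: 10x the baseline write rate

class WriteRateDetector:
    def __init__(self):
        self.events = defaultdict(deque)            # user -> timestamps of recent writes
        self.baseline = defaultdict(lambda: 1.0)    # user -> learned writes/sec baseline

    def record_write(self, user: str, now: float | None = None) -> bool:
        """Record a file write; return True if the user looks anomalous."""
        now = now or time.time()
        q = self.events[user]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        rate = len(q) / WINDOW_SECONDS
        return rate > self.baseline[user] * THRESHOLD_MULTIPLIER

detector = WriteRateDetector()
if detector.record_write("alice"):
    print("anomalous write burst detected: block the session and snapshot the share")
```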

Cloud storage routing, introduced with the recent 7.5 release, allows buckets from various clouds to be logically grouped. This subtle feature provides directory-level data placement to optimize data residency based on business, compliance, project or technical needs, or quality of service with data placed close to users.

The other new module is UFO, with direct S3 read-write access from the portal. It couples NFS, SMB and S3 under a single namespace, exposing the same content via different access methods. This is required for data integration in data pipeline and analytics projects.
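As a hedged illustration of what a unified namespace enables, the sketch below reads a file over S3 that could have been written over SMB or NFS, using standard boto3 against an S3-compatible endpoint. The endpoint URL, bucket name, key and credentials are placeholders, not CTERA's actual values.

```python
# Hypothetical example: reading a file written over SMB/NFS through the S3
# interface of a unified file-and-object namespace. All values are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://portal.example.com",      # placeholder gateway endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The same object that users might see as \\share\projects\report.pdf over SMB
obj = s3.get_object(Bucket="projects", Key="report.pdf")
print(obj["ContentLength"], "bytes read via S3")
```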

The ransomware and UFO capabilities will be available at the end of June 2023.


Equalum
Data integration is a challenge, and the current market climate, with its need for real-time analytics, increases the complexity.

The firm, founded in 2015 with $39 million raised across 5 VC rounds, is a software company developing an enterprise-grade real-time data integration and data streaming platform featuring industry-leading change data capture (CDC).

With the classic explosion of data volume, velocity and variety, data integration is a real mission for enterprises, as all of them recognize that they live in a data-driven world. The environment mixes on-premises systems, cloud, various APIs, data lakes and data warehouses, and integrating all of that is tough, delicate and time consuming. Couple it with the real-time requirement and it becomes a real nightmare. This explains why several companies were founded to address this pain, and the growing data pipeline ecosystem is now a real market segment. Clearly, CDC and ETL represent two key friction points where users spend their time.

The company offers a modern data integration platform based on CDC, able to connect, extract, replicate, enrich, transform and aggregate data from various sources and load it into popular targets. The solution relies on advanced proprietary CDC and streaming ETL to reach a new level of real-time analytics without any code to write or integrate. CDC is a fundamental element of the solution and a real differentiator versus its competitors. In terms of market adoption, customers include Warner Bros, T-Systems, GSK, Walmart, ISS and Siemens.
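To make the CDC concept concrete, here is a generic sketch of consuming change events from a Kafka topic and applying them to a target. Equalum itself is a no-code platform, so this is not its API; the topic name and the event schema are assumptions for the illustration.

```python
# Generic illustration of change data capture (CDC) streaming: consume
# insert/update/delete events from a Kafka topic and apply them to a target.
# Not Equalum's actual pipeline; topic and event schema are assumptions.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "orders.cdc",                      # hypothetical CDC topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

target = {}  # stand-in for the analytics target (warehouse table, cache, ...)

for event in consumer:
    change = event.value               # e.g. {"op": "u", "key": 42, "row": {...}}
    if change["op"] in ("c", "u"):     # create / update
        target[change["key"]] = change["row"]
    elif change["op"] == "d":          # delete
        target.pop(change["key"], None)
```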


Finout
Founded in 2021, the company raised $18.5 million in 3 VC rounds on the idea of delivering an advanced, comprehensive view of usage-based billing for cloud services. The main idea really is to view global costs beyond the cloud service providers' own services.

The market calls this segment FinOps, and we have to admit that several players have understood the opportunity to optimize online services bills. The team considers that the cloud is inevitable and will remain a strong force on the market for the coming years, swallowing more and more workloads and associated data.

Finout's approach relies on 4 major functions: Visualize, Allocate, Enrich and Reduce. Visualize is the first fundamental brick as it offers a global view of all cloud expenses in a single bill named the MegaBill. It helps cloud users, enterprises of any size, see in detail what they consume and the associated cost, and define alerts and customizable dashboards. Allocate, the second function, can then tag or attribute each cost item to characterize these elements and highlight trends. Next, Enrich provides connectors to external solutions such as Datadog, Prometheus or CloudWatch to show complete end-to-end service charges. And finally Reduce, represented by Finout CostGuard, surfaces cloud waste and spend anomalies and displays rightsizing recommendations to optimize cost against real usage. The installed base globally sees an average 20% yearly cost reduction, which provides an almost immediate ROI. The solution is very granular, able to assign cost lines to various business or IT entities, applications or even sub-components.
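As a rough illustration of the Allocate and Reduce ideas, the sketch below attributes raw cost line items to teams via tags and flags a team whose spend jumps well above its recent average. The data, tags and thresholds are purely illustrative and are not Finout output.

```python
# Illustrative sketch of Allocate (tag-based attribution) and Reduce (anomaly
# flagging). Data and thresholds are made up for the example.
from collections import defaultdict

line_items = [
    {"service": "EC2", "cost": 120.0, "tags": {"team": "payments"}},
    {"service": "S3", "cost": 35.0, "tags": {"team": "payments"}},
    {"service": "Snowflake", "cost": 410.0, "tags": {"team": "analytics"}},
]

# Allocate: one consolidated, MegaBill-style view grouped by team tag
per_team = defaultdict(float)
for item in line_items:
    per_team[item["tags"].get("team", "untagged")] += item["cost"]
print(dict(per_team))   # {'payments': 155.0, 'analytics': 410.0}

# Reduce: naive anomaly check against a 7-day average (history values assumed)
history = {"analytics": [300, 310, 295, 305, 298, 302, 300]}
for team, cost in per_team.items():
    past = history.get(team, [cost])
    avg = sum(past) / len(past)
    if cost > avg * 1.3:
        print(f"{team}: today's spend {cost:.0f} is >30% above the recent average {avg:.0f}")
```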

In terms of implementation, there is no code to add or agent to deploy, just connectors to set up per service. The product supports all major cloud providers, Kubernetes, cloud data warehouses like Snowflake, applications like Salesforce or Looker, and CDNs.

It confirms once again that usage-based pricing is a major trend on the market, with a user-oriented approach from Finout and a vendor-oriented one from Amberflo.

The coming KubeCon conference will be the place to discover the new product iteration Finout plans to show. It is a governance suite dedicated to Kubernetes with an agentless approach to manage, forecast and optimize multicloud operations relying on Kubernetes.


LakeFS by Treeverse
Treeverse is the company behind LakeFS. Founded in 2020, it has 28 employees globally, 15 of them developers, and has raised $23 million so far. The founders came from SimilarWeb, a famous Israeli web company specialized in website analytics and intelligence.

LakeFS is an open-source project that provides data engineers with versioning and branching capabilities on their data lakes through a Git-like version control interface. As Git is the standard for code, the team imagined a similar approach to control data set versions, with the capability to isolate data for specific needs without copying it, finally providing safe experimentation on real production data. The beauty of this resides in its capacity to work on metadata without touching the data. The immediate gains are visible: no data duplication is needed so storage cost is obviously reduced, data engineer productivity is boosted according to the vendor, and recovery is fast.
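A minimal sketch of that Git-like model, assuming lakeFS's S3-compatible gateway where the repository maps to the bucket and the branch is the first element of the object key. Endpoint, repository, branch and object names below are placeholders, and the experiment branch is assumed to exist already.

```python
# Sketch of branching on a data lake through lakeFS's S3-compatible gateway:
# bucket = repository, key = "<branch>/<path>". All names are placeholders.
import boto3

lakefs = boto3.client(
    "s3",
    endpoint_url="https://lakefs.example.com",   # placeholder lakeFS gateway
    aws_access_key_id="LAKEFS_KEY",
    aws_secret_access_key="LAKEFS_SECRET",
)

# Read the production version of a dataset from the 'main' branch...
prod = lakefs.get_object(Bucket="datalake", Key="main/datasets/events.parquet")

# ...and write an experimental version into an isolated branch. No data is
# duplicated: the branch only adds metadata pointing at the same objects until
# something actually changes.
lakefs.put_object(
    Bucket="datalake",
    Key="experiment-1/datasets/events.parquet",
    Body=b"...modified parquet bytes...",
)
```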

The second key point for such a product is integration, to facilitate adoption. LakeFS splits this into 6 groups: storage, orchestration, compute, MLOps, ingest and analytics workflow. For each of them, the company has obviously chosen the widely used solutions; for storage this means Amazon S3, Azure Blob Storage, Google Cloud Storage, VAST Data, Weka, MinIO and Ceph.

Adoption of LakeFS is already there, with more than 4,000 members in the community and companies such as Lockheed Martin, Shell, Walmart, Intel, Facebook, Wix, Volvo, NASA, Netflix and LinkedIn. Treeverse owns the code and the project doesn't belong to any open source software (OSS) foundation, even though the code is delivered under the Apache 2.0 license.

Essentially, the product is available as OSS for on-premises and also for the cloud, where the cloud flavor is the full version by default. In terms of pricing, the team offers an OSS flavor, limited but pretty comprehensive, and a cloud and full on-premises version at $2,500 per month.

The competition is far from negligible, with both commercial and open source approaches coming from established companies as well as emerging ones.

The goal of the company is to become, as Git is for code, the standard for data version control; we'll see if they succeed.

Pliops
Plenty of IOPS, aka Pliops, founded in 2017, has raised more than $200 million so far in 5 rounds. The team recognized the need for a new way to accelerate application performance in large-scale and data center environments. It touches SaaS, IaaS, HPC and social media use cases, as all of them use large distributed configurations that require new ways to boost IO/s. But this new approach has to consider green aspects, resource density and, of course, cost.

The founding team recognized the growing gap between CPUs and NVMe-based storage, which will hit a wall if nothing is done to address it. SATA and SAS didn't generate the same pressure on CPUs, but NVMe changed that game, just as write amplification and RAID protection were not originally designed for NVMe SSDs.

The company designs and builds XDP, the eXtreme Data Processor, a PCIe card today based on an FPGA and in the future to be replaced by an ASIC. This board sits between the CPUs and the storage layer, exposing an NVMe block interface and a KV API. It supports any kind of NVMe SSD, direct-attached or NVMe-oF based.

XDP continues to gain new key functions, and today these data services essentially cover AccelKV, AccelDB and RAIDplus.

RAIDplus offloads the protection processing to XDP and delivers up to 12x faster performance in various configurations and modes, even during rebuilds. One effect of this approach is the positive impact on SSD endurance, whether TLC or QLC; it's even spectacular for QLC. The last gain is capacity expansion thanks to an advanced compression mode, for a better cost/TB.

AccelDB targets SQL engines like MySQL, MariaDB and PostgreSQL, but also NoSQL ones like MongoDB and even SDS solutions like Ceph. Benchmarks in this domain show high throughput, reduced latency, better CPU utilization and high capacity expansion, with interesting ratios. Beyond these, XDP also delivers 3.4x more transactions per second with MySQL and 2.3x with MongoDB.

AccelKV boosts storage engines such as WiredTiger or RocksDB, with higher multiples in all the categories listed above. It helps build DRAM-plus-SSD based database environments instead of full-DRAM ones that drastically increase cost. A good example of that is the configuration of XDP coupled with Redis.
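To show the kind of key-value workload such an offload targets, here is a small sketch using RocksDB via the python-rocksdb binding as a stand-in. Pliops' own KV API is not shown here; this just illustrates the put/get pattern whose compaction and write amplification the card aims to take off the host CPU.

```python
# The kind of key-value workload a storage engine like RocksDB handles, used
# here only as a stand-in; Pliops' actual KV API is not shown.
import rocksdb  # pip install python-rocksdb

db = rocksdb.DB("kvstore.db", rocksdb.Options(create_if_missing=True))

# Typical KV pattern: many small writes and point lookups. On a plain SSD this
# drives compaction work and write amplification in the engine; offloading the
# KV layer to a card like XDP aims to remove that work from the host CPU.
for i in range(100_000):
    db.put(f"user:{i}".encode(), b'{"score": 42}')

print(db.get(b"user:1234"))
```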

Of course there is no serious comparison with software RAID, and users of traditional RAID controllers will receive the multiple gains mentioned above, which go beyond just data protection.

XDP can be deployed in various configurations, giving flexibility: pure DAS within a server; with a networked JBOF via NVMe-oF and the board installed in the remote server; the same with the board in the client server; and finally with multiple servers, each containing an XDP, connected to networked JBOFs.


Rookout
The company focuses on developers with an observability platform tailored to live code, on demand. The firm was founded in November 2017, has raised $28.4 million in 3 VC rounds and has 45 full-time employees today.

The idea came from the fact that development shifted to microservices, containers and orchestration in a DevOps mode several years ago, and even accelerated, but tools for developers remain limited, especially for cloud-native applications.

Then comes the observability approach across the whole software development lifecycle, which has become even more complex in the last few years. Rookout's model is about connecting to the source code by adding non-breaking breakpoints in the application. The solution generates immediate information from this instrumented code thanks to a live debugger, a live logger, a live profiler and live metrics. The outcome is rapid, as it reduces MTTR by up to 80% and shortens time to market with a faster release process and improved QA.

The solution covers Kubernetes, microservices, serverless and service-mesh based applications, with multiple languages such as Java, Python, .NET, Node.js, Ruby and Golang.
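As a sketch of what enabling the agent looks like in a Python service: the SDK is started once at process start, after which non-breaking breakpoints are set from the Rookout UI without redeploying. The token and labels below are placeholders, and the exact options should be checked against Rookout's documentation.

```python
# Sketch of enabling the Rookout agent in a Python service. Token and labels
# are placeholders; options may differ by SDK version.
import rook  # pip install rook

def run_app():
    print("service running; live breakpoints can now be set from the Rookout app")

def main():
    rook.start(
        token="YOUR_ROOKOUT_TOKEN",                   # placeholder organization token
        labels={"env": "staging", "app": "orders-api"},
    )
    # ... normal application startup continues here ...
    run_app()

if __name__ == "__main__":
    main()
```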

The firm has plenty of users like Amdocs, Cisco, Dynatrace, Booking.com, NetApp, Autodesk, Santander, ZoomInfo, Backblaze or Mobileye, to name a few. And, of course, integration is key again here, with partner software, cloud providers and various online tools and services.

Beyond the 3 pillars of its observability model – logs, traces and metrics – Rookout just added a 4th one with Smart Snapshots. The platform becomes even more comprehensive with this module, which delivers the next level in application understanding and the capability to capture application state moments after incidents or unexpected events. It contributes to better release times and time to market with a new level of application quality. It goes beyond classic logs and traces, as snapshots contain much more information about the cause and the application context.


StorONE
A new meeting with this company gave us the opportunity to see the progress made by the team.

The first item is related to team changes: Gal Naor, CEO, recruited Jeff Lamothe as chief product officer. The latter has spent time at various storage companies such as HP, EMC, Compellent and Pure Storage. This arrival is key for a company with a clear need to promote itself through a genuinely technically skilled executive.

The team has started a new messaging angle around the Virtual Storage Container (VSC) concept. The company has gone through several iterations of product development and company positioning. Since inception, the ambition has been to shake the industry with a new storage controller model, fully in the SDS wave, to maximize the value of storage media. They designed a cacheless solution with the idea of offering one of the best ROIs on the market. It turns out that the team has reached a new level in this mission.

VSC delivers a hypervisor-like controller dedicated to storage, coupled with storage volumes. This new entity runs on a single Linux server, doubled to offer high availability, and connected to disk arrays, intelligent ones or just JBODs. Multiple VSCs are deployed on these 2 redundant nodes and control their own volumes with their private attributes. These intelligent controllers provide multiple data services such as RAID levels, snapshots, replication or tiering. With its recovery model, a 20TB HDD recovers in less than 3 hours and an SSD in less than 5 minutes.
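A quick back-of-the-envelope check of what that rebuild claim implies, shown as a tiny calculation: 20 TB restored in under 3 hours works out to roughly 1.9 GB/s of sustained rebuild throughput, far beyond what a single HDD can write, which suggests the rebuild work is spread across many drives in the pool.

```python
# Back-of-the-envelope: implied rebuild throughput for 20 TB in 3 hours.
capacity_bytes = 20e12          # 20 TB drive
rebuild_seconds = 3 * 3600      # 3 hours
print(capacity_bytes / rebuild_seconds / 1e9, "GB/s")   # ~1.85 GB/s sustained
```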

The engine is responsible for placing data on the right volume, and the admin has to configure each VSC for its specific usage. A volume can be all-flash, all-HDD or a mix of the two, with potentially automatic and transparent tiering between these entities. Finally, a VSC exposes its data volumes through a block interface like iSCSI, a file one with NFS or SMB, or S3. This solution seems to be a good fit for SMB+ organizations with specific use cases, based on a pure software approach and COTS hardware. StorONE software already runs in Azure, will soon run in AWS, and will be referenced in the marketplace according to the CEO.

In terms of pricing model, as always, Naor tries to do things differently from the competition, and this time the team adopts a per-drive cost whatever the capacity. And of course, nobody reveals their MSRP.

The company, with more than 60 people, spent several years building and penetrating the US market; this time it is Europe's turn, and the organization is actively recruiting in the region to build its European channel force.

Even if the company plans to be cash positive next quarter without any plan to raise a new round, we also understood that Gal Naor's ambition is an IPO; we'll see…


UnifabriX
The company plays in one of the most promising IT segments for the coming years: CXL-based memory technology.

Founded in 2020, we met the firm at its HQ in Haifa, Israel, and realized that the team was at the origin of several innovations such as the Intel IPU, FlexBus and IAL. They clearly are experts in data center architecture and high-performance data fabric design, with proven experience at industry leaders.

Addressing the next big challenge in IT, DRAM bandwidth, capacity and cost limitations, UnifabriX leverages CXL to solve these issues. CXL is a new standard promoted by the CXL Consortium that offers memory expansion, pooling and fabric capabilities across its various protocol iterations and versions. It lets large server pools attach gigantic amounts of memory contributed by other rack members. Being key active participants in CXL's design and evolution, the team has built one of the most advanced CXL-based solutions today, named Smart Memory Node, that bridges servers and memory providers in a fast, intelligent and secure way.
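As a generic Linux-side illustration, not specific to UnifabriX: CXL-attached or expanded memory typically appears to the operating system as a CPU-less NUMA node, so listing nodes and their CPU/memory mix shows which capacity came from beyond the local DIMMs. The sketch below reads standard sysfs paths; the interpretation of a CPU-less node as fabric memory is an assumption about a typical setup.

```python
# Generic Linux sketch: list NUMA nodes and flag CPU-less ones, which is how
# CXL-expanded memory commonly shows up. Not UnifabriX-specific.
import glob
import os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    with open(os.path.join(node, "meminfo")) as f:
        total_kb = int(f.readline().split()[3])   # "Node N MemTotal: X kB"
    kind = "cpu-less (likely expanded/CXL memory)" if not cpus else "local"
    print(f"{os.path.basename(node)}: cpus=[{cpus or '-'}] mem={total_kb // 1024} MiB {kind}")
```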

This 2U chassis embeds memory modules and NVMe SSDs, all in the EDSFF format, plus its RPU or Resource Processing Unit, for a total of 128TB. It already supports CXL 1.1, is capable of working with CXL 2.0 and is ready for 3.0 plus PCIe Gen 5, as demonstrated at the recent SuperComputing show in Dallas in November. Several combinations are possible to connect and expand memory between servers. This event was a pivot point for CXL and the company; organizers set up a dedicated CXL pavilion with plenty of vendors, all active participants in this landscape. UnifabriX was the most spectacular one, with a 26% performance gain over saturated configurations on the HPCG benchmark. The numbers shown were just impressive, with very high IO/s, very low latency and significant bandwidth.

In terms of market, the opportunity is huge; projections and expectations reach impressive numbers for the coming years as soon as new CPUs flood the field. It is expected to take off this year, as AMD already ships CXL-capable CPUs. Micron estimates a $20 billion market in FY30 and a memory pooling segment of between $14 and $17 billion within a global $115 billion server memory total addressable market.

Of course, this is not for everyone, and UnifabriX identifies HPC as the primary demand, followed by the cloud and on-premises markets. In other words, HPC will adopt CXL first due to its memory-hungry jobs, serving also as the proof point, and then a more "classic" server market will be addressed, if we can say that. The conclusion is immediate: you have to be ready, and UnifabriX understands that.




Monday, April 03, 2023

Rookout enhances its developer observability platform

Rookout, an emerging player in developer observability, joined the recent IT Press Tour in Israel and unveiled Smart Snapshots.

This new key component, Smart Snapshots, represents the fourth pillar of observability alongside metrics, logs and traces. It offers an intelligent way to roll back code to a specific moment aligned with a specific application state. It maintains consistency at the code level with large datasets in memory that require a new level of code debugging. This context is kept in the snapshot itself and presents deep information about the state of the application. The firm now has a comprehensive model with its 4-pillar approach and continues to lead the pack in that domain.


Rookout will be present in a few weeks in Amsterdam for KubeCon Europe and will dig into that new key module and the global product strategy.


Monday, March 20, 2023

49th edition of The IT Press Tour in Israel

The IT Press Tour is back in Israel for the 4th time, for its 49th edition, the week of March 26th. This tour will be dedicated to IT infrastructure, data management and storage:

  • CTERA, a leader in cloud file storage,
  • Equalum, a recent CDC and ETL company,
  • Finout, an emerging leader in cloud cost management,
  • Treeverse, a developer of Git-based data lake control,
  • Pliops, a reference in off-CPU data services,
  • Rookout, a young software company building observability tools for developers,
  • StorONE, an established SDS actor,
  • and UnifabriX, a fast-growing player in the CXL world.

I invite you to follow us on Twitter with #ITPT and @ITPressTour, my Twitter handle @CDP_FST, and the journalists' respective handles.


Thursday, February 09, 2023

Coldago announced its Gems 2023

Coldago Research just unveiled its latest Gems for 2023. The selection reflects market trends, the companies' product developments and announcements, their real impact on the market, and their vision associated with their capability to execute on it. For 2023, I selected GRAID Technology, LINBIT, Pliops, PoINT Software & Systems and UnifabriX.

The interesting part is that I have done this analysis for more than 10 years, having started it in 2010. Below is the full list since the beginning.

Year: Companies
2023: GRAID Technology - LINBIT - Pliops - PoINT Software & Systems - UnifabriX
2022: Fungible - Lightbits Labs - Liqid - Model9 - VAST Data
2021: Nebulon - Pavilion Data - OwnBackup - Robin.io - Versity
2020: Data Dynamics - Hammerspace - HYCU - Kasten - StorPool
2019: Igneous - Komprise - MinIO - Qumulo - WekaIO
2018: Cohesity - Datera - Portworx - Rubrik - Vexata
2017: Datrium - Infinidat - Hedvig - Nasuni - NooBaa
2016: Cloudian - Coho Data - Compuverde - DDN - E8 Storage
2015: Actifio - Caringo - CTera Networks - Kaminario - PernixData
2014: DSSD - Exablox - Panzura - Primary Data - Sanbolic
2013: Avere - Nimble Storage - Tintri - SolidFire - Virsto
2012: Cleversafe - Fusion-IO - Pure Storage - Symform - Zadara
2011: Arkeia - StorSimple - TwinStrata - Veeam - Violin
2010: Coraid - DataCore - FalconStor - Isilon - Nexenta

I also recently recorded a dedicated podcast for the French Storage Podcast; it is episode 143. Enjoy your listening.


Tuesday, February 07, 2023

Towards a standard in container security with Sysdig

For the 4th time, we met Sysdig in San Francisco during the recent 48th edition of The IT Press Tour. The first meeting was in June 2016, 7 years ago, just 3 years after Loris Degioanni founded the company. We had the opportunity to meet him again along with Suresh Vasudevan, CEO, whom we have also met in the past. It was definitely a very good session and we learned about all the progress made by the company and its impact on the market. That helps us put in perspective elements shared several years ago. What a trajectory and impact, with tons of Falco downloads and business in 20+ countries. So far the company has raised $750 million and I'm still wondering why it needs such a large amount. Of course, we understand that in this market companies have to move fast, invest a lot, recruit people and gain market share. At the same time, the company has to protect itself against potential acquisition, as the goal could be an IPO.


Since the beginning, the company has defined its mission as "to accelerate and secure cloud innovation" from source to run, meaning from the development to the operation phase. And this works in a loop, as upgrades and changes are permanent with CI/CD considerations. They do that with a fundamental element, Falco, the open source standard for cloud-native threat detection and response. Around Falco, Sysdig designs, builds and develops Sysdig Secure and Sysdig Monitor, 2 commercial offerings tailored to Kubernetes environments.

The move of enterprises to a distributed computing model has obviously introduced risks in terms of security at all levels. Vulnerability management, configuration, IAM and threat detection are 4 key areas that are paramount to address to facilitate cloud adoption and users' success.

Falco is the foundation; it is integrated with several security products on the market like Sumo Logic, StackRox, and CloudGuard from Check Point. The philosophy is to monitor system and cloud events, and even more, via a standard open API. Falco is agentless, well advanced in threat detection, and recognized as one of the most advanced security engines in the domain.
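For a sense of how Falco's event stream can be consumed, here is a small sketch that reads Falco's JSON output (enabled with the json_output option) line by line and surfaces the higher-priority alerts. The field names follow Falco's JSON event format; the downstream action is hypothetical.

```python
# Small consumer sketch for Falco's JSON output: read one event per line and
# print the higher-priority ones. The alerting action is hypothetical.
import json
import sys

ALERT_PRIORITIES = {"Emergency", "Alert", "Critical", "Error", "Warning"}

for line in sys.stdin:                      # e.g. falco -o json_output=true | python falco_watch.py
    try:
        event = json.loads(line)
    except json.JSONDecodeError:
        continue
    if event.get("priority") in ALERT_PRIORITIES:
        print(f"[{event['priority']}] {event.get('rule')}: {event.get('output')}")
```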


Sysdig has developed real expertise on the container security side with key people, and also leverages public repositories, a honeypot network and vulnerability research. Sysdig Secure appears to be one of the most comprehensive approaches in the domain today and, coupled with machine learning, delivers a new level of protection. As the Kubernetes environment evolves in real time with very frequent CI/CD updates, it represents a new difficulty, especially for elements currently in use. Partners understand this, and the partnership with Snyk is a perfect example of deep integration, bridging developers, security and operations.

At the same time, Sysdig briefed the group on the coming annual Sysdig 2023 Cloud-Native Security and Usage Report, available here. It turns out that the container world is still very open to threats, and access management practices remain very relaxed, representing an open invitation for penetration. Costs are also not well controlled, with CPU, memory and container usage clearly in waste mode. The summary slide speaks for itself.


Friday, February 03, 2023

Project Velero, an open source data protection framework for cloud-native workloads

During the recent 48th edition of The IT Press Tour, we had the opportunity to have a session on Project Velero at Red Hat office in Sunnyvale.

Velero is an open source tool that offers data protection, migration and DR for cloud-native workloads orchestrated by Kubernetes. And as enterprises move more and more to these Kubernetes-controlled environments, it clearly becomes a must-have, whether the deployment model is on-premises or public cloud.

I mention the location of the meeting because this project is a joint effort of several active vendors, such as Dell, Kasten, Microsoft, Red Hat and VMware, in alphabetical order. The web site of the project clearly indicates that it is backed by VMware, but when we checked the page pointed to, http://vmware.github.io/ then https://github.com/vmware/, we didn't find any project named Velero. Of course, on the main GitHub page, a search for Velero gives results. This link should be updated.


Velero comes from Heptio, a Seattle-based company acquired by VMware at the end of 2018, where it was the Ark product.

As already mentioned above, Velero's mission is to protect stateful information within a Kubernetes cluster, such as etcd and persistent volumes. The difficulty in a distributed environment is to be sure everything is consistent, I should say frozen at the same time, to have a real point-in-time view of the entire application environment. This is really difficult and represents a classic theoretical challenge to solve.


Velero has 3 components: a Velero server in the cluster, a CLI client, and a Restic daemonset in the cluster as well. Velero relies on the Kubernetes API server to correctly handle etcd data and cloud provider snapshots, but it can also perform a file system backup with Restic, for instance, and then uses a data mover function to copy this data to external object storage for backup image persistence.
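In practice, Velero is typically driven through its CLI (or the corresponding custom resources). Here is a minimal sketch wrapping the CLI from Python to back up a single namespace and wait for completion; the backup and namespace names are placeholders, and the object storage target comes from Velero's own installation configuration.

```python
# Minimal sketch driving Velero through its CLI: create a backup of one
# namespace and wait until it finishes. Names are placeholders.
import subprocess

def backup_namespace(name: str, namespace: str) -> None:
    subprocess.run(
        ["velero", "backup", "create", name,
         "--include-namespaces", namespace,
         "--wait"],                      # block until the backup completes
        check=True,
    )

backup_namespace("demo-backup-001", "demo")
```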

We understand that Velero is used by several key vendors such as Veritas with NetBackup, Dell with PowerProtect, VMware, Red Hat and a few others.


Wednesday, February 01, 2023

HYCU picks a radical new approach for SaaS applications data protection

HYCU joined the recent IT Press Tour for the 8th time, confirming the value of the tour. And this time, they chose the tour to make a big splash: the launch of R-Cloud, their disruptive solution to protect users' data used by SaaS applications. They briefed us last week in San Jose, CA, for an official announcement today.

For a few years we have seen vendors always protecting the same SaaS applications, the famous Salesforce, Google Workspace or Microsoft 365, and as the opportunity is big enough for all players, they don't really address other needs, or only incrementally. With bare metal, virtual machines, containers, cloud-based workloads and now SaaS applications, the industry has created its own complexity, and it becomes even more critical and tough to protect all this data with one solution. We even see users jumping into SaaS applications without any information about data protection, or believing it is included. It should be, but it is not, and that is a real scandal for SaaS. We understand the PaaS option and the different IaaS requirements.

For SaaS itself, the complexity is huge, as not all SaaS applications are covered, and when they are it is by different solutions. What a nightmare.

HYCU took some time to think about a solution to address this fast-growing issue and came back with a truly innovative approach. At the root, the team realized that SaaS application vendors know best how to protect their data: they know their service architecture and how their services consume data. So the idea is to offer these vendors, free of charge, the capability to build their own agent/plug-in/driver/module with a low-code/no-code approach, then managed and orchestrated by HYCU Protégé. It gives them the confidence to say that choosing their service is now risk-free in terms of data loss, as data can be recovered.

The second key element is R-Graph, a topology view of all SaaS applications used by a company, to understand their needs and whether things are protected.

The next key component is the Marketplace, where all modules are displayed and very easy to deploy. The service is sold under subscription, between $1 and $3 per seat per month whatever the volume of data.

This is a real disruption for the SaaS world and HYCU has initiated a new way to control the data protection aspect. The race has started and we'll see how the competition reacts to this announcement. We'll also monitor the adoption of the HYCU solution.


Tuesday, January 31, 2023

Modern cloud requires modern tools

Pulumi was one of the good surprises of the recent IT Press Tour in California, as we met the company for the very first time. Pulumi was founded in Seattle in 2017 and has so far raised $57.5 million, with 100 employees today but 1,500+ customers and 100,000+ users, pretty impressive.

The mission of Pulumi is to build and deliver infrastructure solutions that enable large-scale cloud and application environments. The associated needs group architectures with tons of services, lines of code and resources, with a goal of updating all of them very frequently in a very secure way. In fact, a real challenge at scale.

The Pulumi product, free and open source, is an infrastructure-as-code tool, cloud agnostic, working in public and private but also hybrid modes, and is ready for various user profiles from developer to security engineer. It is already integrated with lots of partner tools and products like CI/CD solutions. The first value of Pulumi is its universal model, avoiding custom developments and integrations that have difficulty scaling and evolving over time.

The Pulumi Service SaaS works perfectly with the product and shows a strong security model coupled with an openness that makes it the product of choice for dynamic deployment. Beyond its wide cloud support, the user can use any language, I should say their preferred one, to build their own code leveraging best-practice templates. The Pulumi API invites users and enterprises to build their own web services, of course their own CLI mode, and obviously CI/CD workflows. Globally it confirms that, at scale, automation is the most fundamental feature in addition to security.
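To show what infrastructure as code in a general-purpose language looks like with Pulumi, here is a minimal Python program in the usual shape of a Pulumi project's __main__.py: it declares one cloud resource and exports an output. The resource name is illustrative.

```python
# Minimal Pulumi program in Python: declare a cloud resource as regular code
# and export a stack output. The bucket name is illustrative.
import pulumi
import pulumi_aws as aws

# Declarative resource expressed as Python code
bucket = aws.s3.Bucket("press-tour-demo-bucket")

# Stack output, readable later with `pulumi stack output bucket_name`
pulumi.export("bucket_name", bucket.id)
```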


Tuesday, January 17, 2023

48th edition of The IT Press Tour in California again

The IT Press Tour is back in California for the 48th edition the week of January 23rd. This tour will be dedicated to Kubernetes and DevOps and we'll meet 7 companies and/or projects:

  • Amberflo.io, a young actor dedicated to metering and billing,
  • CloudFabrix, a fast growing AIOps company,
  • HYCU, a leader in SaaS backup that plans to announce a new product with us,
  • Rakuten Symphony, formerly Robin.io, a reference in cloud native platform and storage,
  • Solo.io, the active player in Service Mesh and API control,
  • Sysdig, the established leader in open source security for microservices,
  • and Velero, the open source project to backup, migrate and provide DR for Kubernetes-based workloads.

I invite you to follow us on Twitter with #ITPT and @ITPressTour, my Twitter handle @CDP_FST, and the journalists' respective handles.


Thursday, December 15, 2022

Linstor, ready for Kubernetes

The recent session with LINBIT during The IT Press Tour was good, as we had time to dig into LINBIT's Kubernetes strategy, essentially covered by Linstor, a dedicated storage solution for Kubernetes. In keeping with the company's DNA, the product is open source under the GPL license and goes beyond Kubernetes, also supporting OpenNebula, OpenStack and OpenShift, but it is above all a generic storage volume manager for Linux. It's obvious that the product targets persistent volumes for stateful workloads.


The architecture relies on multiple services, with an active/passive manager named the Linstor Controller, one active at a time, and a series of agents, called Linstor Satellites, deployed on each node in the cluster under the controller's management. The Linstor Controller is managed via API or CLI. Linstor also integrates DRBD for data resiliency at the block level.
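Outside Kubernetes, volumes are typically provisioned by talking to the controller with the linstor CLI. The sketch below wraps a few of those commands from Python to create a DRBD-replicated 10G volume on two nodes; the node, resource and storage-pool names are placeholders, and the exact commands and flags should be checked against the LINSTOR documentation for your version.

```python
# Sketch of provisioning a replicated volume via the LINSTOR CLI, wrapped in
# Python. Names and sizes are placeholders; verify flags against the docs.
import subprocess

def linstor(*args: str) -> None:
    subprocess.run(["linstor", *args], check=True)

resource = "pvc-demo"
linstor("resource-definition", "create", resource)
linstor("volume-definition", "create", resource, "10G")

# Place a DRBD-replicated copy on two satellite nodes (placeholder names)
for node in ("node-a", "node-b"):
    linstor("resource", "create", node, resource, "--storage-pool", "pool-ssd")
```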

For Kubernetes, Linstor exposes a CSI driver, and LINBIT also develops an operator named Piraeus with DaoCloud. This is a CNCF sandbox project. As an analogy, it plays the same role as Rook for Ceph. It is definitely one of the few right storage management services to associate with each Kubernetes cluster deployment.
