
Scalable and Sustainable Supply Chains with DLTs and ScyllaDB

25 minutes


In This NoSQL Presentation

Explore how IOTA addressed supply chain digitization challenges, including the role of data serialization formats (EPCIS 2.0), Distributed Ledgers (IOTA), and scalable, resilient databases (ScyllaDB) across specific use cases.

José Manuel Cantera, Technical Analyst & Project Lead, IOTA Foundation

José holds a Master's Degree with honors in Computer Science from the University of Valladolid (Spain). He has also completed specialization courses in Economics, Applied Research, and GIS. Currently, he is a Technical Analyst and Project Lead at the IOTA Foundation.

Video Transcript

Hello and welcome, everybody, to this presentation of ScyllaDB Summit 2022, Scalable and Sustainable Supply Chains with DLTs and ScyllaDB. My name is José Manuel Cantera. I'm a technical analyst and project lead at the IOTA Foundation, the organization in charge of developing the IOTA Tangle, a public, permissionless distributed ledger. First of all, who am I? I'm based in Spain.

I am leading different initiatives dealing with the digitalization of global trade and supply chains, leveraging the IOTA distributed ledger. I have more than 20 years of experience in the industry, working mostly with open source projects and open source communities and projecting that work into open standards like W3C, ETSI or GS1. I enjoy coding and hands-on work a lot, and I hope this presentation will be interesting for you.

First of all, let me introduce the problem around supply chain digitization, which we think is a great challenge ahead, and there are different reasons. The initial reason is that there are multiple actors and systems generating data, and these actors and systems, for which identity is a key theme, can be all those that you are seeing there: suppliers, OEMs, food processors, brands. If you think, for instance, of the automotive industry, we have suppliers and we have the car assemblers, the OEMs, but we also have other actors: the car buyer, the consumer, the dealers, the repairers. So many, many different actors are integrating data along the supply chains.

On the other hand, we have multiple relationships. There are business-to-business relationships, for instance between a supplier and an OEM, but there are also business-to-consumer relationships. And when there are importation and exportation processes where regulation comes into play, there are business-to-government and government-to-government relationships. In general, there is no central source of truth or central anchor for the information that all these multiple actors are generating. So in general there is no central point of trust, and one would not make any sense, because we are talking here about relationships that can go cross-border or cross-organization.

Around modern supply chain digitization, there are different functional needs, where the key thing is having trust between the different actors through verifiable data. The main functionality that is required, and I will explain more about it later, is traceability. Traceability is going to be an enabler for compliance, product authenticity, transparency and provenance, with a view to different kinds of applications. Among the most important, which you have probably all heard about, are ethical sourcing and food safety, but there are others around the functional problems I mentioned before and will explain later.

The first use case I would like you to think about is what happens when cross-border trade operations are in place, and I have to say that this is a multilayered domain, and there are many different problems that have to be solved at different places. Import and export procedures imply regulatory aspects where different documents have to be interchanged between exporters and importers, but also exchanged with the public authorities in business-to-government relationships, and also from the point of view of the transportation between the exporting country and the destination country. There are, again, multiple actors appearing. You have [Indistinct]. You have ground transporters, borders. You have the customs, and these interactions have to be traceable, and these interactions between the multiple actors have to be trusted, but without a central anchor, without a central point of trust. And you see other subdomains. You have the trade procedures where, in this case, we have invoices, CP notices, any kind of document that has to do with a commercial transaction.
And last but not least, you have the financial procedures, the pure financial transactions between the two parties, and you can see here that this is a multilayered domain and the relationships between the different actors are quite complex.

And what are we doing to solve, or to digitalize, these cross-border trade problems? Well, at the IOTA Foundation, we think that the best way to deal with this is to start materializing some solutions that allow us to explore how these kinds of problems can be solved. This is the TLIP initiative that we are developing today with TradeMark East Africa, which is another not-for-profit organization devoted to stimulating trade in the East Africa region. What we are doing in this project is allowing different actors, different government agencies and the private actors, the traders, to share documents and to verify documents in one shot. Whenever a consignment moves between East Africa and Europe, all the trade certificates, all the documents, can be verified in one shot by the different actors, and the authenticity and the provenance of the documents can be traced properly. As a result, the agility of the trade processes is improved, along with the efficiency and the effectiveness. And the way these actors share the documents between them is through the infrastructure provided by the IOTA distributed ledger, using an architecture that I will describe later.

Another use case, a very, very important use case, is end-to-end supply chain traceability. In cross-border trade, we are trying to solve the problem of sharing and verifying the different documents, but there is another problem, which is: what is the provenance of the trade items that are interchanged in an import-export operation? And that's another very, very important problem to be solved. When we think about traceability, we need to think about the definition of traceability given by the United Nations. In principle, traceability implies the capability of tracing the history of something, okay? So in the case of trade items, we need to know what has been happening with a particular trade item, not only from the point of view of the transportation but from the point of view of the origin of that item, and I will give you an example. Why do we need to trace the history? Because when one of the actors, or the provider of a particular trade item, is making some claim, that claim should be verifiable. How? Through distributed ledger technology, for instance, as I will show later. In general, these claims have to do with sustainability, with safety in the case of food, for instance, or with how the production of a particular item impacts the environment.

In general, the schema for traceability (the source of this figure is GS1) is the one you are seeing there. Just to illustrate that with an example, we can think about a potato producer. A potato producer is a farmer, and this farmer is going to sell his product to the Chief Potato food processor, and the Chief Potato food processor is going to transform the potatoes into a bag of potato chips. On the other hand, for producing the potatoes, the farmer has to use some fertilizer, and that fertilizer has been produced by another manufacturer using certain raw materials. And here the key thing is that in order to be able to trace the bag of potato chips, you need all the history, starting from the bag.
The bag, of course, has been created in the factory using not only potatoes but some oil, et cetera, and the potatoes have been harvested at a particular point in time and fertilized using a particular fertilizer. This description, in computational terms, can be defined as events, and they are called critical tracking events. Why? Because they are the events that allow us to control the traceability of the bag of potato chips. The other point about the critical tracking events is the key data elements. Each of these critical tracking events has key data elements that describe it, and they have five dimensions: the who, the what, the when, the where and the why. And you will see that there is also a sixth dimension, the how, that I will explain later, associated with the new standard EPCIS 2.0.

So now that I have introduced the problems we are trying to solve, it's time to talk a little bit about the reference architecture and the standards we can use to build this, and, of course, the technologies, like ScyllaDB, that can help us to implement solutions to the problems that I introduced before. Well, we need to think, first of all, about the technical challenges.

The initial technical challenge is data interoperability. Here we have different actors that need to interchange data among themselves, but this data has to be understood by the different actors using a common syntax and reference vocabularies, so that there is semantic interoperability. But not only that; also extensibility, because the data interoperability standards shall be extensible to meet the needs of a particular industry, for instance the automotive industry, the seafood industry or other industries. And here the main actors in the standardization are W3C with JSON-LD; GS1 with EPCIS 2.0, which I will explain later; UN/CEFACT, which provides edi3 reference data models; and there are also sectoral standards for data interoperability: the DCSA in maritime transportation, MOBI in mobility and automotive and, for instance, the Global Dialogue on Seafood Traceability, to name a few.

The second challenge is having a scalable data store. I mean, if we are tracking every single item in the supply chains, we are going to need to store a lot of data, and this is a big data problem. Here, ScyllaDB provides many advantages for implementing this kind of system, because we can scale our data very easily. We can keep the data for a long period of time at a fine granularity level, but not only that, we can combine the best of the NoSQL and SQL worlds, because we can have robust schemas for having robust and trusted data; a minimal modeling sketch of such an event store follows at the end of this section.

The third pillar, or the third technical challenge, is a scalable, permissionless and feeless distributed ledger technology, and here the IOTA distributed ledger, in combination with [Indistinct] protected storages like IPFS, can provide the functionalities around data and document verifiability, auditability and immutability within these peer-to-peer interactions, because, in our approach, the interactions between the different actors in the supply chains are peer-to-peer, and we are pursuing decentralized architectures. So here there are no privileged actors that have all the data.
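To ground that second challenge, the scalable data store, here is a minimal sketch of how an EPCIS event repository could be modeled on ScyllaDB using the Python cassandra driver (ScyllaDB is CQL-compatible, so the same driver works). The keyspace, table and column names are illustrative assumptions, not the actual TLIP or IOTA schema; the point is that partitioning by item identifier makes "give me the full history of this item" a single-partition, time-ordered read.

```python
# Minimal sketch: an EPCIS event repository on ScyllaDB.
# Assumes a local ScyllaDB node; names are illustrative, not the TLIP schema.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])  # ScyllaDB speaks the CQL protocol
session = cluster.connect()

# Single local node for the sketch; production would use NetworkTopologyStrategy.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS traceability
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

# Partition by the traced item (EPC) so the full history of one item is a
# single-partition query; cluster by time so events come back in order.
session.execute("""
    CREATE TABLE IF NOT EXISTS traceability.epcis_events (
        epc          text,       -- what: the item identifier
        event_time   timestamp,  -- when
        event_id     text,       -- unique event identifier
        biz_step     text,       -- why: e.g. 'packing', 'shipping'
        biz_location text,       -- where: e.g. a GLN for the warehouse
        payload      text,       -- the full EPCIS 2.0 JSON-LD document
        PRIMARY KEY ((epc), event_time, event_id)
    ) WITH CLUSTERING ORDER BY (event_time ASC)
""")

session.execute(
    "INSERT INTO traceability.epcis_events "
    "(epc, event_time, event_id, biz_step, biz_location, payload) "
    "VALUES (%s, toTimestamp(now()), %s, %s, %s, %s)",
    ("urn:epc:id:sgtin:4012345.011122.25", "event-001", "packing",
     "urn:epc:id:sgln:4012345.00002.0", '{"type": "ObjectEvent"}'),
)

# Trace one item: a single-partition, time-ordered read.
rows = session.execute(
    "SELECT event_time, biz_step, biz_location "
    "FROM traceability.epcis_events WHERE epc = %s",
    ("urn:epc:id:sgtin:4012345.011122.25",),
)
for row in rows:
    print(row.event_time, row.biz_step, row.biz_location)
```

This kind of key design is what keeps item-level tracing cheap even at very large row counts: every trace query touches exactly one partition.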
So the data is distributed among the different participants in the supply chain, and here Web3 approaches and peer-to-peer libraries are playing a key role.

When it comes to data interoperability, I would like to introduce the GS1 EPCIS 2.0 standard. GS1 EPCIS 2.0 makes it possible to represent the different supply chain business events in JSON format, in this case JSON-LD, and it also offers a REST API that allows querying these events at will. And what is the content of an EPCIS 2.0 event? Well, it is quite simple. You have the five dimensions that we were talking about before. So, for instance, in a packaging operation you can capture what items are being packed in a particular crate, when it happened, at which place (for instance, at a warehouse) and what the process was, but furthermore, you can also capture things like physical conditions, the how. So what was the temperature when that happened? And you can imagine all the applications. For instance, all the applications for cold chain traceability: when certain goods, for instance vaccines, are moving from A to B and the cold chain has to be kept, the EPCIS events reported by the actors participating in the transportation make it possible to verify that the conditions and the compliance of the transportation were met. An EPCIS event is just a piece of JSON-LD content which captures all these dimensions in a standard way, but the good thing is that an EPCIS event, since it is based on JSON-LD, can be extended. It can be extended with sector-specific vocabularies or with specific terms that can model extra events that can be useful for a particular application.

So if we have all these components, what can be the reference architecture for this kind of solution? Well, first of all, we have the physical layer, where all the data is being captured. The data can come from RFID readers, from scanners; it can come from GPS, from tractors in the field, from factories, et cetera. Then this data goes to the data capture layer, where the data coming from the devices is exposed, and here the data can also be exposed by any kind of system, many kinds of OT systems or IoT platforms, in the data capture layer. Then, from the data capture layer, the events generated have to be transformed into business events, into EPCIS events. And here, of course, is where these events have to be distributed to the different participants in the supply chain. So for instance, if I hire a particular transporter, and the transporter is moving the goods that I have commissioned him to move, I can get a notification. I can get an event saying, "Yes, I have started to move your trade items through the supply chain," and all these events, of course, have to be recorded; not literally recorded, but committed to our distributed ledger.

So what does this mean, committed? It means that the originator of the event generates a transaction on the distributed ledger that can later be used by any participant in the supply chain to verify the authenticity of the event. Furthermore, once the originator commits the event, the event is resistant to modification. He cannot modify it, because if the event were modified, the verification step would fail, and, of course, your supply chain partners would complain.
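To make the event content and the commit/verify step concrete, here is a small sketch. The ObjectEvent below follows the GS1 EPCIS 2.0 JSON-LD bindings for its standard fields; the "ledger" is a plain dict standing in for the IOTA Tangle, since the real anchoring flow (ledger transactions, decentralized identities, signatures) is more involved than hashing alone.

```python
# Sketch: an EPCIS 2.0 ObjectEvent and the commit/verify idea.
# The dict below stands in for the IOTA Tangle; real anchoring is richer.
import hashlib
import json

event = {
    "@context": "https://ref.gs1.org/standards/epcis/2.0.0/epcis-context.jsonld",
    "type": "ObjectEvent",
    "eventTime": "2022-02-09T14:30:00.000Z",             # when
    "eventTimeZoneOffset": "+01:00",
    "epcList": ["urn:epc:id:sgtin:4012345.011122.25"],   # what
    "action": "OBSERVE",
    "bizStep": "shipping",                               # why
    "bizLocation": {"id": "urn:epc:id:sgln:4012345.00002.0"},  # where
    "sensorElementList": [{                              # how (new in 2.0)
        "sensorReport": [{"type": "Temperature", "value": 4.1, "uom": "CEL"}]
    }],
    # The "who" would come from the originator's decentralized identity.
}

def canonical_hash(doc: dict) -> str:
    """Hash a canonical serialization so every party derives the same digest.
    (A real system would use full JSON-LD canonicalization, not key order.)"""
    blob = json.dumps(doc, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

# 1. The originator commits the event's hash to the ledger.
ledger = {}  # stand-in for the IOTA Tangle
ledger["event-001"] = canonical_hash(event)

# 2. Any partner who receives a copy re-hashes it and compares.
received = json.loads(json.dumps(event))  # the copy a partner was given
assert canonical_hash(received) == ledger["event-001"]  # authentic

# 3. If anyone tampers with the event, verification fails.
received["bizStep"] = "receiving"
assert canonical_hash(received) != ledger["event-001"]  # tamper detected
```

Step 3 is exactly the property described above: once committed, the event cannot be silently modified by anyone, including its originator.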
For doing this, of course, you need identity. And how can the identity of the participants in the supply chain be captured? Well, this is done through decentralized identities, which are also anchored to the DLT layer.

Last but not least, there is the data-sharing layer between the different actors in the supply chain. We are getting a lot of events, but these events have to be consolidated into a view of what has happened with a particular item, and this happens in the information management layer. In the information management layer, you can have inventory management; item-level tracking, which is the digital twin associated with the different items; the catalog information, which is the static information; and the trade documents. This information management layer is like a consolidation layer for the information generated in the data-sharing layer. And, of course, an important aspect that we have experienced a lot in the TLIP project is that existing IT systems, which could be single-window systems for trade operations, supply chain management, et cetera, have to be integrated in some way, and in our experience one of the biggest challenges is the integration of existing supply chain systems or single-window systems into this modern layer based on distributed ledger technologies.

And finally we have the application layer, where the different actors can use applications to benefit from this data originated in the supply chain. Here we can have back office applications, supply chain optimization, track and trace, product transparency for consumers, any kind of cross-border trade app such as TLIP, where we have a simple application that allows the different actors to verify, in one shot, the different documents associated with trade transactions, et cetera. These applications can depend a lot on the use case, but the bottom line is that, thanks to the lower layers, and thanks to having a reference architecture and key standards like EPCIS 2.0, we can build these applications following similar patterns. And the important point about these applications and this architecture is decentralization. We are not assuming that the data is going to be stored in just one place and by one actor. The data can potentially be spread among multiple actors, and the way of having trust between the different actors is the distributed ledger. That's the key point.

So what is the role of ScyllaDB in this architecture? Well, we can think about different use cases. In the automotive supply chain, if we consider an OEM with 10 million cars manufactured per year, where each car has 3,000 trackable parts, and each part can have a lifetime of 10 years and can generate 10 business events, we can think about 300 billion active business events that could very well be stored in ScyllaDB. If we think about a maritime transportation operator that is moving 50 million containers per year, we can think about 2.5 billion active events. And this is only the events repository, the EPCIS 2.0 repository; there are other layers that require this level of data scalability: the item-level tracking, the inventory, the catalog, et cetera. And actually our own distributed ledger technology, as was presented in previous ScyllaDB Summits, has already adopted ScyllaDB: our archiving node, our Permanode, which contains all the transactions generated in our distributed ledger, is built on it.
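The arithmetic behind those figures is easy to reproduce. In the sketch below, the automotive numbers are the ones just stated; the per-container event count for the maritime case is not given in the talk, so the value used here is purely an assumption for illustration.

```python
# Back-of-the-envelope event volumes for the two use cases.

# Automotive: one OEM, figures as stated in the talk.
cars_per_year   = 10_000_000  # cars manufactured per year
parts_per_car   = 3_000       # trackable parts per car
events_per_part = 10          # business events over a part's 10-year lifetime
automotive_events = cars_per_year * parts_per_car * events_per_part
print(f"automotive: {automotive_events:,} active business events")
# -> 300,000,000,000 (the 300 billion mentioned above)

# Maritime: 50 million containers per year. The 50 events per container
# is an assumed figure, not from the talk.
containers_per_year  = 50_000_000
events_per_container = 50
maritime_events = containers_per_year * events_per_container
print(f"maritime: {maritime_events:,} active events")
```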
So as a conclusion, supply chain digitization has a lot of technical challenges, and in facing them, business as usual, the bespoke solutions, does not work. It requires interoperability, so it's important to align with the open standards: EPCIS, and the decentralized identifiers and verifiable credentials coming from W3C. It requires a reference architecture to guarantee that semantic interoperability and some reusable building blocks are used. It requires decentralization, and for decentralizing your data you need distributed ledger technology, particularly public, permissionless and feeless distributed ledgers like IOTA, complemented with IPFS, and increasingly with decentralized applications. We need, of course, data scalability and availability, and ScyllaDB is the perfect partner here. And last but not least, trusted data sharing: the key to having trust between different actors, where the data is distributed and decentralized, is the distributed ledger and the decentralized identifiers. So thank you very much for attending this presentation. Let's keep in touch, and I will be happy to provide more details.
