
Episode 19

Transforming Freight Logistics with AI and Machine Learning

Luis Moreira-Matias


Description

Luis Moreira-Matias is Senior Director of Artificial Intelligence at sennder, Europe’s leading digital freight forwarder. At sennder, Luis founded sennAI: sennder’s organization that oversees the creation (from R&D to real-world productization) of proprietary AI technology for the road logistics industry.

During his 15-year career, Luis has led 50+ FTEs across 4+ organisations to develop award-winning ML solutions that address real-world problems in fields such as e-commerce, travel, logistics, and finance.

Luis holds a Ph.D. in Machine Learning from the University of Porto, Portugal. He has a world-class academic track record with high-impact publications at top-tier venues in ML/AI fundamentals, 5 patents, and multiple keynotes worldwide, ranging from Brisbane (Australia) to Las Palmas (Spain).

Show Notes
Resources
  • Career Transition from Academia to Industry

    • Early exposure to practical ML applications in business during PhD.
    • Natural progression from academia to industry roles.
  • State of the Freight Industry

    • Highly fragmented, inefficient industry with environmental impacts.
    • Opportunity for technological disruption to improve efficiency and sustainability.
  • Role of Data, ML, and AI in Sennder’s Operations

    • Transforming traditional freight brokerage with digital marketplace.
    • Predictive and dynamic pricing for improved efficiency.
    • Automated load matching and personalized carrier recommendations.
  • Technological Challenges and Solutions
    • High stakes in B2B context with low margin for error.
    • Emphasis on incorporating business knowledge into ML solutions.
    • Strong monitoring, feedback loops, and fast response to issues.
  • Structural Approaches in ML Teams

    • Split roles: Technical Product Owner for customer focus, Engineering Manager for technical oversight.
    • Avoids overloading single managers, ensures focused responsibilities.
  • Successes with ML Products

    • Evolution of pricing algorithm, adapting to different business needs.
    • Impact on company growth and customer satisfaction.
    • Recommender system tailored for B2B context.
  • Reducing Environmental Impact

    • Mission to make road logistics more sustainable.
    • AI and ML-driven optimization to reduce empty runs and CO2 emissions.
    • Advanced scheduling and network optimization for efficient fleet management.

Luis Moreira-Matias LinkedIn Profile:

Dr. Luís Moreira-Matias | LinkedIn

 

Sennder Page:

sennder Technologies GmbH | Digital Freight Forwarder

 

The Data Scientist:

Media – Data Science Talent

Transcript

00:00:00

DD: This is the Data Science Conversations Podcast with Damien Deigh and Dr. Philipp Diesinger. We feature cutting edge data science and AI research from the world’s leading academic minds and industry practitioners, so you can expand your knowledge and grow your career. This podcast is sponsored by Data Science Talent, the data science recruitment experts. Welcome to the Data Science Conversations Podcast, my name is Damien Deigh, and today we’re talking to Luis Moreira-Matias. Luis is the Senior Director of Artificial Intelligence at Sennder, Europe’s leading digital freight forwarder. At Sennder, Luis founded sennAI, which oversees the creation of proprietary AI technology for the road logistics industry, taking concepts from R&D all the way to real-world productization.

During his 15-year career, Luis has led over 50 full-time equivalents across four organizations to develop award-winning ML solutions that address real world problems in a variety of fields, including e-commerce, travel, logistics, and also the finance sector. Luis holds a PhD in machine learning from the University of Porto in Portugal, he possesses a world-

00:01:30

class academic track record with many high impact publications in ML and AI fundamentals. He holds five patents and has delivered multiple keynotes around the world, spanning Australia, Spain, and many others. Luis, welcome along to the podcast. It’s great to have you here.

LMM: It’s a pleasure to be here, Damien. Thank you for the invitation.

DD: Okay. So, you have a very varied background, Luis, can you please tell us how you made the transition from academia and research into industry?

LMM: Sure, sure. That story starts with the inverse transition. When I finished my master’s in software engineering, I went to a consultancy house back in Portugal and worked on data reporting tools for two years. Then I got the challenge to move back to academia and do a PhD in machine learning, and not just any PhD in machine learning, but one that was industry funded. I joined a blue-sky research lab full of mathematicians and statisticians developing novel approaches to problems in machine learning and, more broadly, artificial intelligence. But I was always the engineer in the lab, always with the mindset of building things that can change the world around us.

So, when you bring that mindset to a PhD that is funded by companies which provide not only the money but also the use cases and the data to apply the R&D you are doing in the lab,

00:03:23

you get very early exposure to building ML products and doing ML for business during your PhD. And that’s what happened to me. So that transition was the natural step. By the very nature of the work I was doing and the skill set I was acquiring, the next natural step was not a position in academia but a position in industry, and it happened organically. As a fruit of my work, I got the kind of exposure that made headhunters reach out to me on a social network, and three months later I found myself in Germany, even though my PhD would only formally finish six months later. Yeah.

DD: Fantastic. And you’ve worked across a variety of industries and ended up at Sennder, which is in the freight industry. What attracted you to work in freight?

LMM: I was always a transport person. In my childhood I lived between two houses in the city of Porto, and both had a train line passing maybe 500 to 800 meters away. I would always picture where that train was going. Is it a cargo train, a passenger train? Where is it going? Where could that train take me? That curiosity about transportation ended up shaping choices later in my life. My PhD had applications to transport and mobility.

00:05:10

Light trams, but also buses and taxis. Freight is just another manifestation of transport and mobility as a whole. What I believe happened in Europe in the last decade, for a series of reasons, is that we did not create the conditions for a company to seriously disrupt the transport sector with AI the way, for instance, Uber and Lyft did in the US. I believe those conditions are now in place, and Sennder is very well positioned to be one of the top three players across the transport industry to do it in Europe this decade. Yeah.

DD: Okay, fantastic. And talk maybe a little bit about the context of the industry when you joined. Obviously, it was a very old traditional industry in terms of how they operated. So, perhaps give us the sense of what you found.

LMM: The road freight industry is roughly a €400 billion industry in which the top three players operate only about 5% of the business. So it’s very fragmented, and it’s very inefficient: roughly 30% of all kilometers run by these trucks are run empty. You can imagine the footprint, not only economic but also environmental, that this has. So there was a huge opportunity to change the world as we know it with technology. On the other hand, it was very analog and not operated very efficiently. The operational side was human driven, and

00:07:19

many times companies were still asking us for our fax number. I didn’t know that still existed, but in this industry it is still the reality today for many companies, not for us, but for many companies. So there is a huge opportunity to disrupt this industry and make a change that will last. That is really what attracted me to join Sennder.

DD: Okay. So, can you maybe give us an overview of the role of data, ML, and AI in Sennder’s operations?

LMM: Let’s think about the traditional business model of freight forwarding. On one side we have large companies that want to move bulk containers of goods from A to B, let’s say from Warsaw to Paris. What they used to do was dozens of phone calls, emails, and faxes between these big companies and what we call a freight forwarder, or broker. On the other hand, you typically have small trucking companies: 90% of trucking companies in Europe probably have a fleet of 15 trucks or fewer, and many are just one person, where the driver is the company, with one truck. These gigantic companies and these very small companies operate with different business models, so they prefer to have a broker in the middle who understands both business models and can serve as a proxy.

00:08:53

Traditionally these brokers are contacted in an analog way, and there can even be second-line and third-line brokerage: you subcontract, and subcontract, and subcontract until you reach the actual operator that will perform the service. All the decisions made along the way are very subjective, based on personal connections. Then the service is performed, eventually, if there is no fraud or doubtful quality. And then there are more problems, more emails, more phone calls, more faxes, more letters to protest, to give feedback, to somehow get paid or make sure invoices are received, and so on. So the idea of Sennder is to digitalize all of this into a marketplace where all the information is in one place, we bring transparency, and we largely improve the customer experience on both sides.

So the shipper, these large companies, can come to us and advertise a service request on the platform, while the carriers, the small trucking companies on the other side, can log in to the marketplace and actually bid on or simply take one of the services available. The problem is that on this marketplace nowadays we have seven to ten thousand loads every day. If we only offer bidding, bids take a long time, so the shipper has to wait until the bidding is finished to get somebody

00:10:38

to serve the load. And the carrier often has no idea how to bid on that load: which bid should I make? There is no transparency on the bid, so they have no idea how much money they should put in.

They are used to having a human to discuss that with, a human they trust. That’s where the human factor comes in. So predictive or dynamic pricing is something that can be absolutely disruptive in this marketplace. We can enable a one-button sale by predicting a price that is acceptable and optimal for everybody involved: for the shipper, and for us, to guarantee we have a brokerage margin, but also to guarantee that the carrier has enough money to cover the cost of their trip and still make some profit on their side. This replaces a whole chain of analog and digital communication, and the human in the loop, with one price that everybody can trust, accept, and move forward with. It also reduces bid time from hours and days to virtually nothing, as long as the right carrier logs in and accepts that price and that load.

Another big problem is finding the right load. Searching through 7,000 to 10,000 loads is a big hassle for the carrier. And this marketplace is different from, for instance, traditional retail marketplaces, where we want customers to stay long and build a big basket of items. Here that’s not the case: we want the customer to stay as

00:12:23

short as possible, because they will only get one load here, one at most. Once they get the load, they can go back to what makes money for them, which is driving their truck. So finding the right load is key to minimizing time on the platform and maximizing conversion rate. That’s where we have a recommendation system that guarantees a personalized experience for our carriers, recommending three to ten loads at login time that we believe these carriers are willing to perform. Combined, this makes a marketplace that probably has no parallel in the European market today.

DD: So, essentially, I think what you’re describing is an Uber-style marketplace for the freight industry. Instead of getting a passenger from A to B, you’re getting cargo from A to B by connecting the lorry and the driver, effectively the taxi, with the customer, which in this case is not a person but a load. So obviously there are significant technological challenges there, Luis. Maybe talk about some of the challenges around setting this up, and where did you guys start?

LMM: For us, it always starts with defining what success looks like. That sounds like a cliché, right? But if you think about it one degree of abstraction deeper, you’ll see it is not, because for dynamic pricing the question is: what makes a price a good price? That is really subjective, isn’t it? What is a good price now may not be a good price in

00:14:12

five minutes. You need to convert, but you need to maximize your margin as well, and these two things play against each other, because the larger the margin, the less likely you are to convert. So it’s a really tough problem. If you think about the recommender, you have a similar situation. On top of that, there is the main problem of doing ML in B2B businesses like Sennder.

We cannot afford big mistakes. Why? Well, in traditional B2C your business scales by scaling up the number of customers you have. If you make a mistake and a customer has a very bad experience, you may lose that customer. That’s fine, you have millions of others to make up for it, and as long as you do well for 98% or 99% of customers, you’ll do just fine. Here that is not true, because you are sometimes dealing with massive companies, and it is enough to mess up one or two loads to lose a customer that costs you 10% of your revenue. That is something you cannot afford if you want to be a profitable business one day, and we want to accelerate our profitability. So here you need to be really careful.

The problem is that we came from a decade where big data was a thing and we thought we could use all the data in the world, that mistakes on data were inexpensive. Here that is not so true, because the largest

00:15:20

source of data we have is operational data, and making a mistake on this operational data means, for instance, that we mess up the price of a certain load, and the average price of a load nowadays is somewhere between €1,200 and €1,300. It’s not the price of a pair of Converse shoes; it’s really something big. When you make a mistake, you can pay big time in the short run, but also in the long run with your customers. So doing experimentation and modeling here means getting away from traditional frequentist statistical learning approaches that assume you can do well on 99% of cases, afford 1% or 2% of mistakes, and be fine. This is challenging because it’s different from what most successful real-world machine learning applications have been doing so far.

The third thing that is really challenging is formulating the problem and finding representations and features that can model the problem, and its outputs, as a function of time. These are dynamic systems. What is a good recommendation today may not be a good recommendation tomorrow; what is a good price right now may not be a good price in five minutes. So how do we build into our models components whose predictions can evolve over time, and that keep learning the underlying patterns continuously? That is really challenging,

00:17:40

again. There are other applications, I think of financial systems and stock markets, where this is also a problem. But here, with B2B, the price of mistakes is what puts this, in my opinion, a degree of complexity above the rest of what we deal with on a daily basis.

DD: Yeah, you’re right. B2B is always a more complex environment to do anything. And how did you navigate those challenges, i.e. the very low margin for error that you had? What did you do? What methods, or what thinking, or philosophies did you use to navigate that?

LMM: One very important thing is that, of course, we are strong on experimentation and keep doing a lot of empirical experimentation, but we also look to incorporate a lot of business knowledge. We work very closely with the business to learn from them and incorporate their knowledge into the way we create the systems. That can be direct: which data sources to use, how to use them in the model, whether to use them directly, as rules, or as eligibility rules. Or, in model training, enforcing common sense, forcing the model, whatever model we end up with, to respect, for instance, that the price per kilometer decreases as the distance gets longer.

 

00:19:24

There are mathematical ways to restrict yourself to sub-families of models; this is a constraint you put on your optimization problem. You need to create a model, and you do it from data, but this constraint must hold whatever model you generate. These constraints are designed based on the input of business experts, which pushes us towards more Bayesian approaches, the alternative to frequentist approaches, in the creation of our models and, overall, of our systems as well. Then we need strong monitoring and feedback loops, because of course we are not immune to problems. We do have errors and we do have problems, but we have to react to those.

We have mechanisms to react to those, including giving operations ways to counter: if we are having a problem with the models, they are the first to be able to work around the model if needed. But also mechanisms to alert us, and ways of working in the team on a daily basis that let us react fast and find solutions. I won’t say in real time, but I ask them to do it in real time. So perhaps real time is a good [laughter], a good metaphor for that. Yeah.
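To make the constraint idea concrete, here is a minimal sketch of how a business rule like “price per kilometer should not increase with distance” can be enforced during model training via monotonicity constraints. The library choice (LightGBM), the feature set, and the synthetic data are illustrative assumptions, not a description of sennder’s actual pricing stack.

```python
# Minimal sketch: encoding the business rule "price per km must not increase with
# distance" as a monotonicity constraint at training time. Library, features, and
# synthetic data are illustrative assumptions only.
import lightgbm as lgb
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
distance_km = rng.uniform(50, 1_500, n)
weight_t = rng.uniform(1, 24, n)
# Synthetic target: price per km falls with distance, rises with weight, plus noise.
price_per_km = 2.5 - 0.0008 * distance_km + 0.02 * weight_t + rng.normal(0, 0.1, n)

X = np.column_stack([distance_km, weight_t])
train_set = lgb.Dataset(X, label=price_per_km, feature_name=["distance_km", "weight_t"])

params = {
    "objective": "regression_l1",     # absolute error, less sensitive to outliers
    "monotone_constraints": [-1, 0],  # prediction must not increase with distance_km
    "learning_rate": 0.05,
    "num_leaves": 31,
    "verbose": -1,
}
model = lgb.train(params, train_set, num_boost_round=200)
```

Whatever trees the booster grows, predictions are then guaranteed to be non-increasing in distance, which is the kind of expert-designed constraint described above.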

DD: Good. Set the bar high, Luis. Yeah. With expectations. Good, good.

00:20:54

LMM: Yeah, they suffer with that. They suffer with that. Sorry guys.

DD: [Laughter]. So, you mentioned something really important, which is the working closely with the business. How do you make sure that that happens with very technical, very smart, very intelligent engineers or ML practitioners? How do you ensure that they do work closely with the business? What’s the structure?

LMM: What I see in many machine learning teams is that they have the engineers and they have the manager, and in ML teams the manager is supposed to be this mythological figure capable of doing everything. This person does hiring, does employer branding, goes on podcasts like this one and writes blog posts, does stakeholder management and explains how the model works as if to a five-year-old. This person is also responsible for the team’s output, responsible for the model results, does performance reviews and one-on-ones. I never saw such a creature; perhaps it exists in Greek mythology, but I never saw in practice somebody who could do all these things well. What ends up happening is one of two things: either the manager tries to do all of this and gets burned out, or they end up neglecting some of these components, and sooner or later that has a price.

00:22:28

Typically, the thing that falls short most is stakeholder management, because machine learning teams are naturally already not very well understood by the company; they sit socially in a corner of the organization. And if they are in the corner, they assume they will always be in the corner: I don’t really need to build connections with the business, I just show up to the meetings, hear what they have to say, and then go back to my day-to-day routine. So who is to blame? I think everybody and nobody. Everybody, because the manager and the others don’t see that this setup is not the best for success; and nobody, because, well, one person doesn’t have time for everything. So one of the things I introduced at Sennder, and that I deeply believe can make a real difference in whether teams succeed or fail, is to have a technical product owner in the teams.

And we have a matrix organization for leadership in ML teams. What this means is that we don’t have this mythological figure of a manager; we have two humans sitting in the teams with different responsibilities. We have the technical product owner, who is the representative of the customers in the team, the one who, as I used to say to my team members, and they laugh when they hear this, has to take walks in the park with the stakeholders and watch the sunrise and sunset with the stakeholders. So build

00:24:03

trust, build connections, listen and work together with them. And with that, be the gatekeeper of the team, so the team keeps its focus on work that is really interesting and is protected from some of the requests stakeholders come up with, half-baked ideas they want implemented that are not really needle-moving. But at the same time, be the gateway to the team.

When there is an emergency, a real problem, this person is the one to go to to make sure the team drops everything and looks into it. This person is responsible for roadmap building and backlog prioritization, and is the voice of the customer in the team. Then, when it comes to technical leadership, hiring, employer branding, performance reviews, making sure the machine works, that is where the engineering manager sits. They don’t need to be concerned with what the team does next or what the next priority is, just with the how: how will we solve this problem, how will we help the team solve this complex technical problem. And that is more than enough scope to keep this person busy.

I would say happily busy throughout a healthy 40-hour working week. So I think this gives balance, and it is a professionalization of leadership in ML teams that is badly needed, especially knowing that these

00:25:35

teams are always a bit further away from the business and a bit less understood, while at the same time everybody is much more curious about what they are doing, because they are doing this magic. So these leaders always have extra work to explain, to prepare good presentations, to make sure the message gets across. And I believe this setup is a way to get them closer to success.

DD: Yeah. I think we’re increasingly seeing companies finally adopt technical product owners and product managers, and they seem to come from a wide variety of backgrounds. How do you structure it? Does a product owner have one project they primarily focus on, or do they focus on multiple projects at a time?

LMM: Each team has a scope, a set of problems that fall within the team, and product owners are allocated to one or more teams. We try to keep the scope of each team from overlapping with other teams; there are always natural overlaps here and there, but we try to minimize them. The product owner is then responsible for that scope. Whatever problems are prioritized by product leadership to be solved by the AI teams within that scope, let’s say pricing, fall under that product owner, who acts as the customer representative in the team, or the related teams, for what is to be done next. Then we need to be very

00:27:15

crisp about what success looks like. And here it is important that the technical product owner does not come from an MBA background,

but more from a computer science background, preferably data science and ML. That’s why we call them technical product owners and not just product owners: they have the ability to translate business language into technical language. And the technical language is not a Jira ticket, the technical language is ML language. I want this person to also be able to be critical of the engineers’ work if needed and to challenge their estimations and so on. I would say those are the two big constraints I use when setting up these product owners and these teams for success.

DD: And do the product owners, technical product owners usually have domain knowledge? Do you think that’s essential?

LMM: There is a big debate in my community about whether leaders in ML need domain knowledge. It certainly does not hurt, but it is not primarily what I search for when hiring in this space, because it is very difficult to find competent, experienced leaders in ML: the talent pool is scarce and the area is relatively new. So you need to drop something, and typically I end up dropping domain expertise. The reasoning is simple: I believe this is something

00:28:59

we can help talent acquire during their tenure with us. We already have a lot of domain knowledge in the company. What you need is for this person to be collaborative enough in their ways of working, and humble enough to learn where their gaps are.

I prefer to invest in keeping the bar high on the technical side and on soft skills such as being humble, extremely collaborative, and egoless as a leader, and on a good grasp of data science and management fundamentals, so they understand the ML product lifecycle very well and can challenge it end to end, rather than on how many years of experience in logistics this person has. We have several people in the company with 10, 15, 20 years of experience who can help him or her catch up on that. Yeah.

DD: I would like to take a brief moment to tell you about our quarterly industry magazine called the Data Scientist and how you can get a complimentary subscription. My co-host on the podcast, Philipp Diesinger, is a regular contributor, and the magazine is packed full of features from some of the industry’s leading data and AI practitioners. We have articles spanning deep technical topics from across the data science and machine learning spectrum. Plus, there’s careers advice and industry case studies from many of the world’s leading companies. So, go to datasciencetalent.co.uk/media

00:30:35

to get your complimentary magazine subscription. And now we head back to the conversation. Changing things up and maybe looking at some of your major successes in terms of ML products and solutions, can you talk about the pricing algorithm that you’ve been working on for, I think, maybe three years now?

LMM: This is a use case that has existed in the company since 2021; I joined in April 2022. The algorithm has gone through several mutations. There are two or three main challenges there. One is that our business is not homogeneous: inside our business we have different flavors. We have contract business and spot business. Contract means a price agreed for a long period of time on the shipper side, the price we receive from the big companies to operate the loads. Spot means the price is negotiated load by load, on the spot, as the name indicates. Because of the nature of these prices, and the nature of the demand, which is very predictable on contract and very stochastic when it comes to spot,

this is reflected in the nature of the pricing phenomenon we are trying to model. Dealing with that dichotomy is hard, and so is dealing with the dynamic nature of the market, which differs from geography to geography. Europe is a very regional business where

00:32:33

each country has its own legislation on how to run road logistics. This is different, for instance, from the US, which is much more homogeneous, let’s say, and it also makes it harder to learn at scale how to build good pricing engines. So we designed an algorithm that is able to learn this. Typically, machine learning algorithms on these types of problems end up focusing a lot on outliers; they really try to avoid having one price that goes very far off, which is also good.

In a sense that suits us, because we don’t want one outlier that makes us look really bad to the shipper or lose a big margin. But because the phenomenon is so stochastic, if we try to chase those outliers, our pricing as a whole becomes very off: we sacrifice most of the points just to go after a handful. So what we do instead is divide the learning regime into two components. One says: are we able to give a price to this load, yes or no? And if we say yes, then another machine learning model says: okay, this is the price for this load. Because something we see in a lot of ML applications is that we throw a question at an ML model and, regardless of how well or how badly the model was trained to answer that specific question, it will always provide an answer.

00:34:20

So we end up dividing it and having some notion of confidence, where we say: for this point we trust our model to make a prediction, and that allows us to be automated on this load. If we don’t, there is no price for it on the marketplace, and the load needs to be bid on in order to be sold, which is fine. But it allows us to be much more aggressive on the machine learning model and say, for instance: ignore outliers, change the loss function of the algorithm to focus much more on the normal points and on the absolute error. With this we can build a more accurate model for a smaller percentage of the points, say 80% or 90%, rather than trying to be exact on 100% of them. It is a subtle change, but for us it really made the difference in the end in the automated profit this algorithm could drive for the company.
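A minimal sketch of this two-stage idea, a gate that decides whether a load can be priced automatically, followed by a price regressor trained on absolute error, might look as follows. The features, labels, threshold, and scikit-learn models are placeholders for illustration; the real system is described as more complex than this.

```python
# Sketch of a two-stage pricer: a gate model decides whether to price a load
# automatically; only then does a robust (absolute-error) regressor quote a price.
# Synthetic data, labels, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 8_000
X = np.column_stack([
    rng.uniform(50, 1_500, n),   # distance_km
    rng.uniform(1, 24, n),       # weight_t
    rng.integers(0, 7, n),       # weekday
])
price = 150 + 1.1 * X[:, 0] + 8 * X[:, 1] + rng.normal(0, 60, n)

# Mark a small fraction of loads as hard to price (outlier-like behaviour).
hard = rng.random(n) < 0.05
price[hard] *= rng.uniform(1.5, 3.0, hard.sum())
priceable = (~hard).astype(int)

# Stage 1: gate model estimates whether we trust an automated price for this load.
gate = GradientBoostingClassifier().fit(X, priceable)

# Stage 2: price model trained only on trusted loads, with an absolute-error loss
# so it focuses on typical points instead of chasing outliers.
pricer = GradientBoostingRegressor(loss="absolute_error").fit(X[~hard], price[~hard])

def quote(x: np.ndarray, confidence: float = 0.8):
    """Return an automated price, or None to send the load to manual bidding."""
    if gate.predict_proba(x.reshape(1, -1))[0, 1] < confidence:
        return None
    return float(pricer.predict(x.reshape(1, -1))[0])

print(quote(np.array([600.0, 12.0, 2.0])))
```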

DD: So, you effectively have two machine learning models, one which decides whether or not you’re going to come up with a price, and then the other algorithm sets the price?

LMM: We have something a bit more complex than that, but you can see it that way: two engines, one that produces the price itself, and something before it that says, are we even pricing this load

00:36:04

automatically? But yes, you can see it as two models doing that. Yeah.

DD: Okay, great. What is the positive impact? Clearly this automates the process for the company, but is there an impact for the customers as well from this approach?

LMM: So, we live in tough times economically.

DD: Yeah.

LMM: As a consequence of the hard times we live in as a society, economically things are shaky, and supply chain businesses suffer a lot from that. However, Sennder keeps growing its business and its revenue massively, I would say, and organically, year on year, and this year was no exception. One thing that changed this year compared to last year is that this growth came together with a large growth in the usage of our marketplace, and that increase in usage came at the same moment as a large increase in the volume of sales made with the price driven by our algorithm. I really believe these three factors are connected, and this can only happen if our customers are happy with our platform. Otherwise the carriers would not keep coming to buy on the marketplace, and the shippers would not keep coming to put loads on the marketplace.

00:37:44

So there is positive impact there. In terms of numbers and magnitude it depends on the geography, but in terms of business overall we can be talking in some places about growth of 30% to 40% year on year, in a lot of places even more. When it comes to usage of the marketplace, 50% is the number in some geographies. And if you are talking about the profit achieved through sales based solely on the machine learning algos we have, the recommender and the dynamic pricing algo, we are talking, from last year to this year, about 5X, a 500% difference. So I believe [inaudible 00:38:34] [over speaking] that puts a signature on the answer you were searching for.

DD: That’s seriously impressive. And I guess the old-school way of doing things is still there in the background, so if your price wasn’t competitive with the brokers they can call up, they wouldn’t use your platform.

LMM: Yeah, absolutely. Not all the business, as I think you understood already, is driven through the platform. We do have a lot of what you could call more traditional business, a considerable part that is still done with a human in the loop. That percentage is going down by the day, that’s why we are here, but it still exists, and it is a component that will always exist. One thing I want to demystify as well, and I’ll use the opportunity to do that, is that we are not designing AI

00:39:29

algorithms to replace humans. What we are trying to do is enable humans to be more productive. Right now a human operator is able to process 30 to 50 loads a day. What we want is for these same human operators to be able to process 500 or 1,000.

There is no way this person can do that without the support of AI and machine learning algorithms that automate some of the decision making in the process and allow these operators to step in only when a problem arises. By doing so we are actually elevating the profession of these people, so they can get, for instance, better salaries, because they are responsible for driving much more revenue per headcount. So I think it is also a real contribution to society and the way we live, through technology, that we are trying to achieve here, and I would like to highlight that.

DD: Yeah. So, you mentioned a couple of things there, including the recommender engine, which is a separate product. Do you want to quickly give us an overview of that? Because I think it’s unique in that it’s a B2B recommender engine. Is that correct?

LMM: That’s correct, that’s correct. The main use case is that right after login, without any context, you pull up 3 to 10 loads and highlight them to your carrier, saying: these are the loads you want to take. The thing is that

00:40:59

we are recommending business to a business. Every single action you take on the marketplace has an effect on your reputation as a company, and it affects not only that specific sale but all the sales you are trying to do in the future with this partner. So the customer lifetime value here is a completely different ball game, and when you make a recommendation you need to be aware of that. If you sense that a load is problematic, that it is something that could cause any sort of friction, do not recommend it; let a human deal with it. Even if the load would be a good match, the carrier might grab it and then have problems.

And there was no human to talk it through with; it was an automated sale, a tough sale that went wrong, and you don’t want that. The tough sales you leave to the phone, to email, to the manual brokerage that can walk the carrier through it and say: I can sell you this, but be aware of this, this, and that, and then take questions and go back and forth. Not because you will not be able to sell that load, but because by selling that load to that carrier without any human in the loop you will probably lose that carrier. You have to be very careful how you build the algo, how you build your loss function, how you build success criteria around the entire lifetime value of your customer rather than just selling items.

00:42:40

DD: The transaction.

LMM: Yeah, the transaction, which is what all the books and all the system design examples about recommender systems on the internet teach you to do very well, in many imaginative ways, but in the end the same way, and with the same problems when you face a use case like this. And look, this is the type of challenge that really makes me wake up in the morning and come to work with my teams, because these problems are really different, really challenging, and they really have the power to change the world around us. Yeah.
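The "hold back risky loads for a human" idea can be sketched very simply. The names, the risk score, and the threshold below are hypothetical placeholders; they only illustrate the shape of the gating logic described above, not sennder's actual recommender.

```python
# Sketch: gate recommendations on a "friction risk" score so problematic loads are
# routed to a human instead of being auto-recommended. All names and thresholds are
# hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Load:
    load_id: str
    relevance: float      # how well the load matches this carrier (from a ranking model)
    friction_risk: float  # predicted probability the sale causes problems downstream

def recommend(loads: list[Load], k: int = 10, max_risk: float = 0.2) -> list[Load]:
    """Return up to k low-risk loads ranked by relevance; risky loads are excluded."""
    safe = [l for l in loads if l.friction_risk <= max_risk]
    return sorted(safe, key=lambda l: l.relevance, reverse=True)[:k]

def route_to_human(loads: list[Load], max_risk: float = 0.2) -> list[Load]:
    """Loads kept out of automation, to be handled via the manual brokerage channel."""
    return [l for l in loads if l.friction_risk > max_risk]
```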

DD: And what did you do then to make the recommender system work and be able to understand that this is the type of opportunity we should be recommending, and these are the ones that we should just leave?

LMM: As with pricing, it is very important to understand what a successful session looks like, how we determine success. The way we construct, for instance, the labels we need to train such an algo must take that into consideration. The definition of success can be something that spans time and goes way beyond the moment where the transaction happens or not. That is absolutely key: having success well defined helps everybody understand whether the algorithm is working or not, how the

00:44:08

engineers define loss functions and evaluation functions for their algo, both before launching and after launching, and how we do experimentation with these algos. So the definition of success is very important, and so are the features.

Not only the features but the data sources you use: being able to look at operational data, of course, data about the load, the description of the load, descriptions of the carriers, but also at how users and carriers behave on the marketplace. Intent can be shown not only by clicking a button; there is a series of signals: do I spend a lot of time on a page or not, do I hover my mouse over a certain item or not, for how long, do I move my mouse fast or slowly? These are things that can show intent, that can show whether I am really considering something, and they can be used for recommendation. But I would say the main thing is really how to define success. And then, on experimentation, how you minimize the number of mistakes you need to make in order to understand what works and what doesn’t.

That really comes down to avoiding the purely frequentist approach to A/B testing, which is what 95% of the pages you find when you search A/B testing on Google will talk about, and going more for things like sequential A/B testing, Bayesian A/B testing, or even

00:45:49

multi-armed bandits, which are alternatives to traditional A/B testing that allow you to minimize the number of [inaudible 00:46:10] times you expose users to the worse version of your experiment, the one that doesn’t optimize your profit or whatever metric you are running for, versus the classic approach that just says: run both in parallel for some time, let it burn, and then we see what works. Yeah, of course, but by the time we finish that experiment we can also close the company, because half the customers are already gone.
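Here is a minimal Thompson sampling sketch of the bandit-style allocation Luis contrasts with classic A/B tests: traffic drifts toward the better variant as evidence accumulates, limiting exposure to the worse one. The conversion metric, the simulated rates, and the Beta(1, 1) priors are illustrative assumptions.

```python
# Thompson sampling for a two-variant experiment: sample a plausible conversion rate
# for each variant from its Beta posterior, serve the best-looking one, update counts.
# Simulated rates and priors are assumptions for the sketch.
import numpy as np

rng = np.random.default_rng(42)
true_rates = {"A": 0.10, "B": 0.14}        # unknown in practice; used only to simulate
wins = {"A": 0, "B": 0}
losses = {"A": 0, "B": 0}

for _ in range(10_000):
    samples = {v: rng.beta(wins[v] + 1, losses[v] + 1) for v in true_rates}
    chosen = max(samples, key=samples.get)  # serve the variant that looks best right now
    converted = rng.random() < true_rates[chosen]
    wins[chosen] += int(converted)
    losses[chosen] += int(not converted)

# Most traffic ends up on the better variant B, so the cost of testing A stays small.
print({v: wins[v] + losses[v] for v in true_rates})
```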

DD: Excellent. So, Luis, how do you go from zero over a three-year period to something as advanced as your pricing algorithm?

LMM: An ML system in the real world is much, much more than a machine learning model. You have to start by understanding that probably the first, or baseline, system you build to do pricing in an automated way will not be ML, or will be something very rudimentary as ML. There is a multitude of problems you need to solve in order to go live, and you need to be very solid on those before you risk anything more advanced. ML is something that propels; it’s like a rocket. If you have a car, and every component of that car is good, and you put a rocket on the back, that rocket

00:47:29

will make your car fly. But [laughter] now imagine one of those really old cars that makes strange noises.

If you attach a rocket to the back of it, most likely you will not get 500 meters; the car will fall apart as soon as you ignite the rocket. ML is the rocket, make no mistake, but before you get the rocket you need the car, and you need to fix the car. There are five problems you typically need to solve when you design a new, greenfield ML project. First, the problem statement: how to go from business problem to machine learning problem, and what success looks like. This is absolutely key. Second, methodology: what you typically know as machine learning modeling, which data you use, which features, which label, which algorithm, which loss function. Third, offline evaluation: how we determine that this model is good enough to go live.

Do we have a baseline to compare it to? Something really simple, really intuitive to the business? I can give you an example. On our recommender, this was just ordering the loads from the most recently entered to the oldest; on pricing, it was just going to the same lane and taking the latest price practiced on that lane as our price now. Very simple things. You would be surprised how well and how competitive these baselines can be against a machine

00:49:08

learning method. And I can tell you, if the machine learning model doesn’t outperform these baselines by a large margin, we’ll go with the baseline every day of the week, because it is easier to maintain, easier to understand, easier to interpret, so it is way easier to operate. Most probably you will make more money with that baseline than with an ML model that takes more attention and more effort to maintain.

So you really need a big surplus to go with ML. Fourth, the architecture: how your data is served to train your model, how automated your process is to train the model, how automated your process is to deploy the model, and how your model is served once it’s live. And finally, fifth, live evaluation: once your model or service is live, how we determine that it is working, which involves experimentation but also live monitoring and so on. You need to get these five things right, and the ML modeling is, I would say, the least important of them; it is the last one. If you don’t get all five right, you risk having a model that doesn’t scale, having a model but not knowing whether it works, or shipping a model just for the sake of having a model, without knowing how much better it is than something very simple.

That’s how you accumulate undesirable tech debt: you have the model just for the sake of having a model, which happens in many

00:50:42

companies that say, we need to use ML too. Why do you need to use ML? Because you read it in a magazine? What you need is a component that does this automatically, and whether you believe that should be ML or not needs to come with a clear benefit. Once you have these five well defined, or rather these four, everything except methodology, then you can fix the other four and say: okay, this is clear, this is working. And then start iterating on the methodology. Most likely, once you start iterating on the methodology, you will want to change things there.

You realize you may have to slightly change the problem statement, or the architecture, in order to get a certain data source, or data that is only available live or only in a certain way, and so you iterate on the other components as well. Then the car can endure the rocket for longer, and not collapse after 500 meters, one kilometer, or three kilometers, but go the long run with it. And you can keep iterating. This is also a pattern we saw on our own pricing model: we started with a linear model, then we went for a more classic ensemble of decision trees, and now we have something more advanced, still an ensemble of decision trees but slightly more

 

00:52:12

advanced, and then a multi-model approach, which we talked about.

And eventually, in the future, we go from here to something else. You see this when you look at real industry cases of successful application of ML models to real business. And here I am deliberately excluding generative AI and ChatGPT, because ML has been around for many years and made an impact on business well before the hype we are observing with generative AI. A good example is Amazon, with their model for forecasting retail sales volume for each product, which they started around 2007, 2008, 2009 with a very simple linear model. Over the years they moved through time series models, ensembles of decision trees, more complex ensembles of decision trees, then a neural network, up to a deep learning model today that uses transformers.

But it took them more than 10 years to go from something very simple to the super complex setup they have today, and it probably will not stop here: as their data evolves and their business evolves, their models will also continue to evolve in complexity. So it is really important to understand that

00:53:47

you don’t jump straight to the optimal solution, and that you have all these considerations on greenfield projects. Yeah.
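The lane-level pricing baseline Luis mentions above is simple enough to sketch in a few lines. The data shape and field names here are illustrative assumptions; the point is only how little machinery such a baseline needs compared with an ML model.

```python
# Sketch of the simple pricing baseline: quote the latest observed price on the same
# lane (origin-destination pair). Data shape and field names are assumptions.
from collections import defaultdict

class LanePriceBaseline:
    """Remembers the most recent price seen per lane and reuses it as the quote."""

    def __init__(self):
        self._latest = defaultdict(lambda: None)  # (origin, destination) -> last price

    def observe(self, origin: str, destination: str, price_eur: float) -> None:
        self._latest[(origin, destination)] = price_eur

    def quote(self, origin: str, destination: str):
        """Return the last price on this lane, or None if we have never priced it."""
        return self._latest[(origin, destination)]

baseline = LanePriceBaseline()
baseline.observe("Warsaw", "Paris", 1280.0)
print(baseline.quote("Warsaw", "Paris"))   # 1280.0
print(baseline.quote("Porto", "Berlin"))   # None -> fall back to manual bidding
```

A candidate ML model would have to beat this kind of baseline by a clear margin, as discussed above, to justify its extra operational cost.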

DD: Great. That’s a fantastic overview, Luis, thank you. One thing you mentioned earlier was that 30% of the kilometers driven in the European freight industry are effectively with empty lorries. Is that the right way to describe it? That obviously has a huge environmental footprint. Is what Sennder is doing helping to reduce that kind of wastage?

LMM: Yeah. Our mission is to fast-forward road logistics, and of course until now we have talked a lot about how to drive tech adoption in an industry that was purely analog, how to do that with software engineering but also with AI. But this is just part of our vision. For us it is also very important to do that in a sustainable fashion. Using the metaphor of the car again: we can see the tech adoption part as the rocket, but if the car is not solid it will not hold the rocket; it will go 500 meters and then stop hard. So we want to contribute to the road freight industry as a whole and make it sustainable. We have two other pillars in our vision for that.

00:55:25

One is to become an employer of choice, taking care of our employees, being a human-first company, and keeping our employee net promoter score high. The others are the percentage of green miles and the amount of CO2 our business emits, and of course we want to maximize the first and minimize the second. Here AI is obviously a big player, not only in doing seamless assignment of loads between shippers and carriers on our marketplace, but also in going a step further. There is a variation of our business model which we call the chartering variation.

Basically, we approach a carrier and make a contract with them. Instead of selling each load individually, we reach out to a carrier and say something like: look, I want your drivers and your trucks to run one million kilometers for me over the next three or six months, at this price, on these routes. So instead of selling each load individually to that carrier, we basically operate their assets for them. We set the scheduling for the drivers and for the trucks and really say: the trucks drive here, rest here, fuel there, pick up a load here, and so on. By having this control, first, we let the carriers focus on what matters most for them and what they do best, which is driving their trucks.

 

00:57:18

It’s the ultimate customer experience for them. But second, we will also be able to do holistic optimization of the truck fleet across Europe. Ultimately this is a planning challenge, not a predictive analytics challenge. Within artificial intelligence there are several types of challenges, and this is a planning challenge, which is why next year we are taking a step forward in specializing our AI approach to this problem by setting up a dedicated team that will only look at these challenges of next-load suggestion and network optimization, with dedicated resources, a dedicated product owner, and a dedicated manager.

We will look into it with experts who have been doing this for a while, in this domain or in other domains, who can apply state-of-the-art methods or even devise proprietary technology within those families of methods to address the problem, either with the more classical operations research approach, such as mixed integer programming, or with more modern reinforcement learning approaches. This is the endgame, because it is the thing that will allow us to put the whole vision together holistically. I would say it is a big step towards the light at the end of the tunnel for us in the company vision.

 

00:59:02

DD: Yeah. And that is a hardcore maths optimization problem. For sure it’s scheduling, it’s automated scheduling, but at a very sophisticated level, effectively.

LMM: It’s a tough problem. I would say it used to be a heavy maths problem, but today modern reinforcement learning approaches have been proving more capable of dealing with its stochastic nature. Basically, the plan has to change because the road networks and the demand are changing. There is one assignment when the vehicle departs, but probably eight hours later the vehicle’s assignment is different, because other loads have come in, and meanwhile a truck broke down somewhere and we need to account for that. It’s like a game of chess.

DD: Yeah.

LMM: Where we play with our pieces, but there is also an adversary we need to play against. That is where reinforcement learning can do very well: deciding which policy I should adopt at each point in time to do the routing, and with that be really smart and learn over time how to optimize the planning, the road logistics, and the supply chain overall in Europe.

01:00:31

That’s the ultimate game. Sennder wants to be a top player, or the top player, in road logistics in Europe, and we believe that’s the path to it.
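A much-simplified sketch of the rolling re-planning Luis describes could look like the following, using a classic assignment solver as a stand-in for the mixed integer programming or reinforcement learning approaches he mentions. The distance-based cost and the coordinates are placeholders, not a description of sennder's planner.

```python
# Simplified rolling re-planning sketch: whenever the fleet state changes (new loads,
# a breakdown), re-solve the truck-to-load assignment. linear_sum_assignment stands in
# for the MIP/RL approaches mentioned; the distance-based cost is a placeholder.
import numpy as np
from scipy.optimize import linear_sum_assignment

def plan(truck_positions: np.ndarray, load_origins: np.ndarray) -> dict[int, int]:
    """Assign each available truck to at most one load, minimizing total empty distance."""
    # cost[i, j] = straight-line distance from truck i to the origin of load j
    cost = np.linalg.norm(truck_positions[:, None, :] - load_origins[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return {int(t): int(l) for t, l in zip(rows, cols)}

trucks = np.array([[52.2, 21.0], [48.9, 2.3], [41.1, -8.6]])   # illustrative positions
loads = np.array([[50.1, 8.7], [45.5, 9.2]])
print(plan(trucks, loads))

# Eight hours later: a truck breaks down and a new load arrives, so we simply re-plan.
trucks_now = np.array([[48.9, 2.3], [41.1, -8.6]])
loads_now = np.array([[50.1, 8.7], [45.5, 9.2], [52.5, 13.4]])
print(plan(trucks_now, loads_now))
```

A reinforcement learning policy would replace the one-shot solve with a learned decision rule that anticipates future loads and disruptions, which is the direction described above.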

DD: And I’m afraid that concludes today’s episode. Luis, thank you so much for joining us today. It was an absolute pleasure talking to you.

LMM: Likewise.

DD: Thank you. Thank you so much. That was really illuminating and inspiring to hear what you’re doing at Sennder. Just very quickly before we leave you, I wanted to mention our magazine, the Data Scientist. It’s packed full of insights on a wide variety of data and AI topics, and you can subscribe to it for free at datasciencetalent.co.uk/media. There is also a summarized version of this podcast in written form, which you should check out in our magazine. So, thanks again to Luis, and thank you for listening. Do check out our other episodes at datascienceconversations.com, and we look forward to having you with us on the next show.