
AI – what works, regulation, trust and speed

  • farah674
  • 3 days ago
  • 7 min read

From People Tech Maritime Bergen: speakers from Odfjell, OSM Thome and the Norwegian Maritime Authority, together with Lars Solbakken and Dimitris Lyras, discussed what is working in maritime AI, how to regulate and govern its use, how to build trust, and how to find the right speed.

 

Using LLMs' capability to understand language may be the most useful application of AI in shipping, including supporting crew in working with procedures, said Gunnar Eide, Manager Digital Applications, Odfjell, speaking at a panel discussion at People Tech Maritime Bergen in November about developments in maritime AI.

 

“That's where we have the most positive feedback,” he said. “Crew told me it has been a game changer. It is so easy now to find the information we need, especially if you are a new chief officer or new chief engineer. In a busy day onboard there's not always time to do these things.”

 

Odfjell has developed AI systems that let crew onboard ‘chat’ with AI about procedures, such as cargo hose handling, manoeuvring in port, or gangway handling, to verify and learn how the company operates onboard, he said. “We see a huge time saving potential both for crew onboard and onshore to manoeuvre in our procedures.”

 

“Working with the simple things like text, procedures, that has been the biggest game changer.”
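The procedure ‘chat’ described above can be sketched in miniature as a retrieval step: find the procedure snippet most relevant to a crew question. This is an illustrative bag-of-words sketch, not Odfjell's actual system; a real deployment would use embeddings and an LLM, and all procedure names and texts below are invented.

```python
# Minimal sketch of retrieving the most relevant procedure snippet for a
# crew question, scored by bag-of-words cosine similarity. All procedure
# names and texts are illustrative only.
from collections import Counter
import math

procedures = {
    "cargo_hose": "Connect the cargo hose, check the gasket and torque the bolts.",
    "gangway": "Rig the gangway, secure the safety net and check the lighting.",
    "manoeuvring": "Before manoeuvring in port, test steering gear and thrusters.",
}

def tokens(text):
    # Crude tokenisation: lowercase, strip common punctuation, count words.
    return Counter(text.lower().replace(",", "").replace(".", "").split())

def cosine(a, b):
    shared = set(a) & set(b)
    num = sum(a[t] * b[t] for t in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def best_procedure(question):
    q = tokens(question)
    return max(procedures, key=lambda k: cosine(q, tokens(procedures[k])))

print(best_procedure("How do we rig the gangway safely?"))  # → gangway
```

An LLM would then be prompted with the retrieved snippet to answer in natural language; the retrieval step is what keeps its answers grounded in the company's own procedures.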

 

Text is a comparatively simple form of data for AI, being already structured and contextualised, he said. Using AI on operational technology data from vessels is much harder: it is not structured, not contextualised, and does not all arrive at the same time-series interval. So a data platform and data engineers are needed to make it AI-ready. “That is a hard job.”
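The alignment problem Mr Eide describes, sensor streams reporting at different and irregular intervals, can be illustrated with a minimal sketch that buckets two invented streams onto a common 60-second grid. The sensor names, values and step size are assumptions for illustration, not from any real data platform.

```python
# Illustrative sketch: aligning two vessel sensor streams that report at
# different, irregular intervals onto a common 60-second grid, so that
# downstream models see one consistent time base. Data is invented.
from statistics import mean

# (timestamp_seconds, value) pairs
rpm = [(0, 80.0), (45, 82.0), (70, 81.0), (130, 79.0)]
fuel_flow = [(10, 1.2), (95, 1.3), (150, 1.1)]

def to_grid(readings, step=60):
    """Average each sensor's readings into fixed step-second buckets."""
    buckets = {}
    for t, v in readings:
        buckets.setdefault(t // step, []).append(v)
    return {b: mean(vs) for b, vs in buckets.items()}

def align(step, *streams):
    """Join bucketed streams on the buckets where every stream has data."""
    grids = [to_grid(s, step) for s in streams]
    common = set.intersection(*(set(g) for g in grids))
    return {b: tuple(g[b] for g in grids) for b in sorted(common)}

aligned = align(60, rpm, fuel_flow)
print(aligned)  # one (rpm, fuel_flow) pair per 60-second bucket
```

Real platforms add interpolation, unit conversion and quality flags on top of this bucketing step, which is part of why making OT data AI-ready is “a hard job.”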

 

“We are on the way. Once we are there for sure, it will take us places,” he said.

 

OSM Thome is also providing crew with tools containing AI to help them with procedures. “Crew are very happy with it, it is very helpful,” said Lars Austgulen, IT Manager, OSM Thome. “There's a lot of time-consuming tasks we can help crew with.”

 

With AI, “there's a lot of shiny tools out there, a lot is happening very quickly,” he said. “We have to align it with our processes and see where it can give us value.”

 

Another example is AI to predict maintenance on machinery, said Lars Solbakken, consultant and former CIO with shipping company DOF. “You can predict where things are going to break down and replace them. It is tremendously powerful.”
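A minimal sketch of the threshold idea behind such predictive maintenance: flag a component when recent readings drift well above a healthy baseline. Real systems use far richer models; the vibration figures and three-sigma threshold below are invented for illustration.

```python
# Hedged sketch of the predictive-maintenance idea: flag a bearing for
# replacement when its vibration readings drift well above the baseline.
# Thresholds, units and data are illustrative, not from any vendor.
from statistics import mean, stdev

baseline = [2.1, 2.0, 2.2, 2.1, 2.0, 2.15, 2.05, 2.1]  # mm/s, healthy period
recent = [2.2, 2.6, 3.1, 3.5]                           # mm/s, latest readings

def needs_replacement(baseline, recent, sigmas=3.0):
    """Alert when the recent average exceeds baseline mean + sigmas * stdev."""
    threshold = mean(baseline) + sigmas * stdev(baseline)
    return mean(recent) > threshold

print(needs_replacement(baseline, recent))  # → True
```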

 

What AI can’t do

 

AI so far has mainly contributed to administrative aspects of shipping, said Nils Haktor Bua, Head of New Maritime Technology, Norwegian Maritime Authority.

 

“I mostly deal with operational parts of the ships. If we talk about what's working on the AI side there, I believe it is not that much.”

 

There is also no clear definition of what AI is, and it is a label attached to many different systems. “We have complex systems onboard, but not everything is AI,” he said.

 

AI is good at reading, assimilating and interpreting text, said Dimitris Lyras, director of Paralos Maritime and Ulysses Systems, moderating the discussion. But AI cannot act on what it reads, as a person might do, because we make our decisions based on much more knowledge than the material we have read, and our ability to put it together.

 

For example, if you tell a person that the company has a certain problem, they will keep it at the front of their mind as they do other tasks.

 

AI has not yet managed to replace many people’s jobs, even where those jobs’ working models have been well explained, he pointed out.

 

We should be thinking about how we can improve people’s performance and run a better company, he said. Currently much of the AI focus is on seeing whether we can make systems which are more intelligent than people. “This is something that will take a long time to answer, and the answer is probably no.”

 

Ideally, software tools would understand how the enterprise works, as things change every second, and know who needs to be informed about something, Mr Lyras said. “We are looking at AI as though it is mature. It is not mature. It is a 5-year-old kid.”

 

Shipping companies have to do a lot of work to get an LLM to do something useful for them. “It isn’t just about putting the right prompt in,” he said.

 

How to regulate AI

 

For regulation and company rules about AI, “I would definitely put accountability on the shipowner because they are the ones that implement it and have the operation,” said Norwegian Maritime Authority’s Mr Bua.

 

We are already working with self-driving ships, remotely controlled and increasingly automated. As a regulator, “we're far from being able to verify such a system. I can't predict if or when we will be ready to do that.”

 

It is much easier to regulate algorithms when they are predictable, but AI is not, he said. “We can manage the data we put in [but] we do not know how the data is processed. It is not easy to say what's coming out, or if that's safe.”

 

“For the high safety risk situations, the verification part of such a system is really difficult.”

 

There are ways you can use AI to support operations without taking undue risk, he said. “You can have innovation without telling the captain not to look out of the window. There are so many steps before you get to the most safety critical parts of the operation. We are a bit too early to discuss. I don't think we have to stop the innovation.”

 

“Maybe at some point we can feel that safety of this system is good enough to be responsible for your function.”

 

Odfjell’s Gunnar Eide countered that regulators could take a more layered approach, with more specific guidelines about what is and isn’t allowed, and who is responsible for what, at each management level. “The government has to set the framework,” he said.

 

“It is a very complex question,” said consultant Lars Solbakken. Consider that society is willing to accept the risk of car accidents. We probably wouldn’t accept the risk if we were not able to pin the blame on someone, normally the driver.

 

With self-driving cars, we would not know who to blame. Yet self-driving cars “will reduce the number of people dead in traffic by 99 per cent because machines are better drivers than people,” he said.

 

Or consider the case of a vessel having an accident because the [human] navigator failed to read an important piece of information. This would not have happened with AI-based navigation, because the AI tool is able to read all the information.

 

This accident scenario could also be considered the fault of the ship manager for putting the crew under information overload, Odfjell’s Mr Eide pointed out. AI tools can help here. “CoPilot can pinpoint what's important and what’s not.”

 

Companies should also think carefully if they plan to prohibit staff from using LLM tools, said OSM Thome’s Lars Austgulen. As people get more accustomed to using AI to help at home, they will want to use AI at work with the same services, unless they are told not to, or provided with tools which are more secure. “People know it will improve their work, get things done faster,” he said.

 

If AI is able to solve a problem, then many people will try to use it and worry afterwards what other problems that may lead to, said Dimitris Lyras.

 

“There are ways to narrow it down, so it does something useful for you that's controllable with not a huge amount of risk,” he said. For example, using a private AI to summarise correspondence describing a dispute with a charterer, so staff do not need to read all the text.
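The narrowed-down task Mr Lyras describes can be sketched as extractive summarisation: pick the most information-dense sentences from a correspondence thread. A private LLM would do this far better; this naive frequency-based version, with an invented correspondence text, only illustrates the shape of the task.

```python
# Naive extractive summary: rank sentences by average word frequency and
# keep the top few, in document order. The correspondence text is invented.
from collections import Counter

text = (
    "The charterer claims the vessel arrived late at the load port. "
    "Weather routing records show the vessel followed the agreed route. "
    "The master noted heavy weather in the noon reports for three days. "
    "We propose sharing the noon reports to settle the delay claim."
)

def summarise(text, keep=2):
    sentences = [s.strip() for s in text.split(". ") if s.strip()]
    freq = Counter(w.lower().strip(".") for s in sentences for w in s.split())
    def score(s):
        words = s.split()
        return sum(freq[w.lower().strip(".")] for w in words) / len(words)
    ranked = sorted(sentences, key=score, reverse=True)
    # Preserve original order among the kept sentences.
    return [s for s in sentences if s in ranked[:keep]]

for line in summarise(text):
    print(line)
```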

 

How much to trust AI

 

“AI is an assistant, a tool for simple things,” Odfjell’s Mr Eide said. “We have told crew, ‘never trust AI, always have your own opinion, double check what it gives you’. That's part of the education you give crew.”

 

“In order to trust technology in general you need to understand it,” Mr Solbakken said. “The worst scenario is if you have an organisation that doesn't trust it, [but] you have people using it because they have it in their pocket.”

 

As a company, it is important to show employees that you understand it, and you want them to understand it. “That's how I would build trust.”

 

“To see something work is the best place to build trust,” said Mr Haktor Bua from Norwegian Maritime Authority.

 

We get trust in AI by using it where it might work, rather than trying to do everything with AI, he said, and by recognising that AI is not the best technology for every problem.

 

Speed of development

 

“The maritime industry is a very innovative industry,” said Lars Solbakken. But it “works in cycles of the lifespan of your vessel.”

 

Technology used to work on 5-year cycles, now it can be more like 5-month cycles. “It is difficult to get the entire organisation, or the vessel, or the people, to move at technology speed. There's a lot of things you need to think about.”

 

The best way to move forward is to “put the smartest techies on the board of the company and give them as much money as you can afford and move as fast as possible,” he suggested. Mr Solbakken also stressed that speed must be matched with responsibility, governance, and a clear understanding of operational reality.

 

You will need to figure out how to get value out of AI yourself; you cannot wait for a manual. “Just as parenthood doesn't come with a manual.”

 

“The speed at which Silicon Valley moves is, in my opinion, the speed at which they can make money,” said Mr Lyras. “Silicon Valley only works fast in the area they think they can sell.”

 

“They throw LLMs over the counter because it reads the internet, they think we will use it. They are not thinking about the things we are trying to do.”

 

“The engine manufacturers are the most important technologists in our business. They don’t introduce things quickly.”

 

“I think Silicon Valley have no idea what we are doing,” said Odfjell’s Mr Eide. “We take what they are bringing to the table and see how we can benefit from it. That's where we are right now.”

 
 
 
