The exhibition booth for the International Committee of the Red Cross at the AI+ military defence conference in Washington definitely stands out – that’s the whole point.
“When technology makes it into a battlefield, it's going to have consequences,” Jonathan Horowitz, legal adviser to the Red Cross, said at the ICRC’s exhibit, which focused on the potential problems of using artificial intelligence in conflicts.
The ICRC's booth at the Walter E Washington Convention Center was surrounded by exhibits from some of the world’s most influential companies with US military defence contracts, along with other entities from around the world. Palantir, Lockheed Martin, Booz Allen Hamilton, Google and Microsoft all had a large presence at the AI+ event.

The three-day conference was organised by the Special Competitive Studies Project, which describes itself as a group that “seeks to recapture the competitive mindset and unifying national mission from past eras, and then adapt them to the age of AI and 21st-century strategic rivalry”.
Proponents of AI on the battlefield say it can help minimise casualties and enhance capabilities, but critics say the technology is far from perfect, missing nuances in the fog of war and often reflecting its developers' biases.
Critics also point out that the rapid rollout of AI across military settings risks disregarding international standards.
"Just because technology is new doesn't mean you can use it in unconstrained ways," Mr Horowitz said. "We want to remind people here what those rules are, and you can find the rules in the Geneva Conventions, in international humanitarian law."
He said that while the ICRC is concerned about the use of AI in war, it also sees its potential in improving the lives of civilians amid conflict.

"It could give militaries better awareness of where hospitals or critical infrastructure are located, and with that knowledge militaries should know not to attack those locations," he explained.
Yet the concern raised in recent months is that once AI platforms are handed over to various militaries, there's little accountability for how they're used.
Microsoft recently carried out an internal review in response to accusations that its AI technology was being used to harm civilians in the war in Gaza. While it found "no evidence" its products were being used in such a way by the Israeli military, it noted that the software and platforms could be deployed on highly secure, independent military networks, which limited the scope of the company's investigation.
The company is not alone in facing criticism over how its AI technology is used. Alphabet-owned Google, Scale AI and Palantir have faced similar accusations.
At the AI+ conference, protesters echoed concerns over the potential harm the technology poses to civilians, particularly in Gaza, interrupting various speeches and panel discussions.
Mr Horowitz added that in the months ahead, the ICRC will continue working to "solve the puzzle" of AI and militaries on behalf of "civilians, who often are the ones that suffer the most in armed conflict."
"We include a new set of recommendations on AI decision support systems and the top of our priorities is the need to retain human control and judgment within those systems," Mr Horowitz said of the ICRC's updated guidance on the technology.
The organisation recently submitted its recommendations on military AI to the UN Secretary-General.
In that document, the ICRC expresses concern about the use of AI in autonomous weapon systems, as well as its use to expedite military decision-making.
It also seeks to raise awareness of the potential for AI to increase the speed at which misinformation and disinformation spread, potentially contributing to and even encouraging violence.
"This submission is intended to support states in ensuring that military applications of AI comply with existing legal frameworks and, where necessary, in identifying areas where additional legal, policy or operational measures may be required," the document concludes.