Can AI help me plan my honeymoon?

In the future, an AI agent could not only suggest things to do and places to stay on my honeymoon; it could also go a step further than ChatGPT and book flights for me. It would remember my preferences and budget for hotels and propose only accommodations that matched my criteria. It might also remember what I liked to do on past trips and suggest very specific activities tailored to those tastes. It might even request restaurant bookings on my behalf. Unfortunately for my honeymoon, today’s AI systems lack the kind of reasoning, planning, and memory needed. It’s still early days for these systems, and there are a lot of unsolved research questions. But who knows—maybe for our 10th anniversary trip?

Deeper Learning

A way to let robots learn by listening will make them more useful

Most AI-powered robots today use cameras to understand their surroundings and learn new tasks, but it’s becoming easier to train robots with sound too, helping them adapt to tasks and environments where visibility is limited.

Sound on: Researchers at Stanford University tested how much more successful a robot can be if it’s capable of “listening.” They chose four tasks: flipping a bagel in a pan, erasing a whiteboard, putting two Velcro strips together, and pouring dice out of a cup. In each task, sounds provided clues that cameras or tactile sensors struggle with, like knowing whether the eraser is properly contacting the whiteboard or whether the cup contains dice. When using vision alone in the last test, the robot could tell whether there were dice in the cup only 27% of the time, but that rose to 94% when sound was included. Read more from James O’Donnell.

Bits and Bytes

AI lie detectors are better than humans at spotting lies
Researchers at the University of Würzburg in Germany found that an AI system was significantly better at spotting fabricated statements than humans. Humans usually get it right only around half the time, but the AI could tell whether a statement was true or false in 67% of cases. However, lie detection is a controversial and unreliable technology, and it’s debatable whether we should even be using it in the first place. (MIT Technology Review)

A hacker stole secrets from OpenAI
A hacker managed to access OpenAI’s internal messaging systems and steal information about its AI technology. The company believes the hacker was a private individual, but the incident raised fears among OpenAI employees that China could steal the company’s technology too. (The New York Times)

AI has vastly increased Google’s emissions over the past five years
Google said its greenhouse-gas emissions totaled 14.3 million metric tons of carbon dioxide equivalent in 2023. That’s 48% higher than in 2019, mostly due to Google’s enormous push toward AI, which will likely make it harder for the company to hit its goal of eliminating carbon emissions by 2030. This is an utterly depressing example of how our societies prioritize profit over the climate emergency we are in.
(Bloomberg)

Why a $14 billion startup is hiring PhDs to train AI systems from their living rooms
An interesting read about the shift happening in AI and data work. Scale AI has previously hired low-paid data workers in countries such as India and the Philippines to annotate data used to train AI. But the massive boom in language models has prompted Scale to hire highly skilled contractors in the US with the expertise needed to help train those models. This highlights just how important data work really is to AI. (The Information)

A new “ethical” AI music generator can’t write a halfway decent song
Copyright is one of the thorniest problems facing AI today. Just last week I wrote about how AI companies are being forced to cough up for high-quality training data to build powerful AI. This story, about an “ethical” AI music generator trained only on a limited data set of licensed music, illustrates why that matters. Without high-quality data, it can’t generate anything even close to decent. (Wired)