A viral call-recording app called Neon has abruptly gone dark after exposing users' phone numbers, call recordings, and transcripts, a breach that sparked widespread concern about data privacy and app security. The app, which pays users to record their phone calls and sells that data to AI firms, became one of the top five free iPhone apps within a week of its launch. Despite that popularity, its sudden shutdown highlights how quickly a fast-growing app can expose the sensitive data it collects.
The app, which lets users earn money by sharing call recordings that are used to train AI models, came under scrutiny after a critical security flaw was discovered. During a brief test on Thursday, TechCrunch found that the app's servers did not check whether the data a logged-in user requested actually belonged to them, so any logged-in user could pull up the personal information of any other user, including phone numbers, call logs, and transcripts.
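TechCrunch has not published Neon's code, but the flaw it describes is a textbook case of broken object-level authorization: the server verifies that a request comes from a logged-in user, then hands back whatever record was asked for without checking who owns it. The sketch below illustrates that pattern and the missing check in hypothetical Flask-style Python; the routes, field names, and data are invented for illustration and are not Neon's actual API.

```python
from flask import Flask, jsonify, abort, g, request

app = Flask(__name__)

# Hypothetical data store -- illustrative only, not Neon's schema.
CALLS = {
    "call-123": {"owner": "user-A", "phone": "+15550100", "transcript": "..."},
}

@app.before_request
def fake_auth():
    # Stand-in for real authentication: trust a header for this sketch.
    g.current_user = request.headers.get("X-User-Id")

@app.route("/calls/<call_id>")
def get_call_vulnerable(call_id):
    # The flaw class described above: the caller is logged in, but the server
    # never checks that the record belongs to them, so any user can read any call.
    call = CALLS.get(call_id)
    if call is None:
        abort(404)
    return jsonify(call)

@app.route("/v2/calls/<call_id>")
def get_call_fixed(call_id):
    # The missing check: tie every record to its owner before returning it.
    call = CALLS.get(call_id)
    if call is None:
        abort(404)
    if call["owner"] != g.current_user:
        abort(403)
    return jsonify(call)

if __name__ == "__main__":
    app.run()
```

In the vulnerable route, being authenticated is treated as sufficient; in the fixed route, authorization is checked per record, which is the control the reported flaw was missing.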
TechCrunch alerted the app’s founder, Alex Kiam, who had previously ignored requests for comment. Kiam took the app’s servers offline and notified users of the outage, but did not disclose the security lapse or the exposure of user data. The app stopped functioning shortly after the discovery, and the notice to users gave no hint of why.
The core issue lies in the app’s design: while it promises users a way to monetize their call recordings, its back end allowed unauthorized access to user data. TechCrunch examined the traffic between the app and its servers using the network analysis tool Burp Suite and found that server responses carried far more than the app displays, including text transcripts, web links to audio files, and call metadata such as the phone numbers involved, call durations, and the earnings paid for each call.
For example, when two TechCrunch reporters placed a test call, the server's transcript reproduced their conversation word for word. More troubling, the back-end servers would also return reams of data about other users, including their most recent calls and transcripts. In one case, TechCrunch found that Neon’s servers produced publicly accessible links to the audio files and transcripts of users' calls, exposing those conversations to anyone with the link.
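What that kind of over-exposure looks like from the network side can be sketched briefly: an intercepting proxy such as Burp Suite shows the raw server response, and the response contains fields the app's interface never renders. The endpoint shape, field names, and values below are invented for illustration; they are not Neon's actual API or data.

```python
import json

# Hypothetical raw server response as it might appear in an intercepting proxy.
# The fields are illustrative: metadata and links the app's UI would never show.
raw_response = """
{
  "call_id": "call-123",
  "caller_number": "+15550100",
  "callee_number": "+15550142",
  "duration_seconds": 187,
  "payout_usd": 0.52,
  "transcript": "Hi, it's me. Can you talk for a minute?",
  "audio_url": "https://example.invalid/recordings/call-123.mp3"
}
"""

data = json.loads(raw_response)

# Everything listed here reaches the client even though the app never displays it.
hidden_fields = {"callee_number", "payout_usd", "audio_url", "transcript"}
for field in sorted(hidden_fields & data.keys()):
    print(f"{field}: {data[field]}")
```

The point of the sketch is that "hidden" data of this kind is only hidden by the client's interface; anyone inspecting the traffic sees the full payload, and a public `audio_url` needs no login at all.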
The app’s shutdown raises questions about the balance between innovation and privacy. While Neon aimed to create a new revenue stream for users, its approach left their data exposed. Kiam's email to customers, which described data privacy as a priority, made no mention of the security lapse, leaving users unaware of the extent of the problem.
This incident is not isolated. Similar concerns have dogged other apps: the dating app Tea suffered a data breach that exposed 72,000 user images, and Bumble was found to let stalkers pinpoint users’ locations. Both episodes raised questions about how effectively app store review catches security failures, underscoring the ongoing challenge of policing app security.
Kiam has not addressed the substance of the flaw. He did not say whether the app underwent a security review before launch, or whether he has logs that would show if anyone else had already found and exploited the issue. Neither Upfront Ventures nor Xfund, the venture firms that invested in Neon, responded to TechCrunch’s inquiries.
In a world where data is increasingly valuable, this case underscores the need for transparency and accountability in app development. As users navigate an era of digital privacy concerns, the Neon incident serves as a cautionary tale about the trade-offs between innovation and security. Whether this app will return to the app stores remains uncertain, but the debate over data protection and corporate responsibility is far from over.