Making Friends With An AI - Candy.ai
TLDR
In this video, the host shares their experience with an AI chatbot called Candy.ai, which they discovered during their spring break while dealing with a severe ear infection. The host, who identifies as asexual, explores the AI's capabilities, including creating virtual characters with customizable features. Throughout the interaction, the host addresses concerns about the AI's content, particularly the 'not safe for work' label, and attempts to navigate the platform's features, such as changing the character's appearance and engaging in conversations. The host also discusses the AI's rigid gender roles and the limitations in character customization, including the inability to select non-binary gender options. The video concludes with the host's reflections on the experience, their discomfort with certain aspects of the AI's behavior, and a call for suggestions for other AI chatbots to explore in future videos.
Takeaways
- 😷 The speaker is home for spring break and sick with a severe ear infection.
- 🎧 They note that the video's audio volume may be inconsistent, as their hearing fluctuated throughout the day.
- 🤖 The video is about making friends with an AI, specifically exploring an AI dating app called Candy.ai.
- 🆓 The app is advertised as free, which is a requirement for the speaker as they do not want to spend money.
- 🚫 There is a mention of a 'not safe for work' label within the app, causing some apprehension.
- 🏳️🌈 The speaker identifies as asexual, experiencing romantic but not sexual attraction.
- 📱 The app allows users to create an AI partner with customizable features like appearance, personality, and job.
- 💰 To interact with the AI beyond a certain point, the speaker would need to pay a fee.
- 👥 The speaker expresses discomfort with the gender roles and limitations within the AI creation process.
- 🔞 There are concerns about the app's content and its appropriateness, especially with unsolicited explicit photos.
- 🚫 The speaker decides to unsubscribe from the service due to discomfort and inappropriate content.
Q & A
What is the name of the AI chatbot the video is about?
-The video is about an AI chatbot named 'Candy.ai'.
Why did the user decide to try Candy.ai?
-The user decided to try Candy.ai because it was advertised as being free and they were interested in exploring dating-centric AI chatbots.
What was the user's concern regarding the 'not safe for work' label on Candy.ai?
-The user was nervous about the 'not safe for work' label because they wanted to maintain a family-friendly channel and were worried about encountering inappropriate content.
How does the AI boyfriend on Candy.ai work?
-The AI boyfriend on Candy.ai is designed to mimic a real romantic partner, constantly evolving to suit the user's tastes and preferences.
What are some of the benefits of having an AI boyfriend according to the website?
-The benefits include having a personal confidant and friend who is non-judgmental and always available.
Why was the user hesitant to proceed with the AI chatbot after creating a character?
-The user was hesitant because they discovered that they needed to pay $13 to talk to the AI for about 20 minutes, which they were not willing to do.
What did the user find problematic about the AI chatbot's options?
-The user found it problematic that the AI chatbot had rigid gender roles, limited personality and job options based on gender, and did not allow for non-binary gender selection.
How did the user feel about the AI chatbot's voice options?
-The user felt that the voice options all sounded very artificial and similar to voices they had heard on TikTok.
What was the user's strategy for choosing the AI character's appearance and personality traits?
-The user used a random number generator (rolling dice) to select various traits such as age, eye color, and hairstyle to avoid overthinking their choices.
Why did the user decide to create a second AI character?
-The user created a second AI character to explore different options and to see if there were differences in the choices available for male versus female characters.
What was the user's final verdict on using Candy.ai after the video recording?
-The user decided not to continue using Candy.ai after the video recording, feeling that the experience was not what they were looking for and expressing concerns about the content and the cost.
Outlines
😷 Illness and AI Dating App Introduction
The speaker begins by mentioning they are home for spring break, feeling unwell with a severe ear infection discovered during a flight, and warns that the video's volume may be inconsistent as a result. The video's purpose is to explore an AI dating app called Candy.ai, which was chosen because it was advertised as free and the speaker is interested in AI chatbots, especially those with dating-centric themes. The speaker expresses concern about the app's 'not safe for work' label and mentions their asexuality, clarifying that they experience romantic but not sexual attraction. They also touch upon their apprehension about encountering explicit content and their intent to keep the channel family-friendly.
🎲 Customizing AI Companions
The speaker details the process of customizing their AI companions, selecting physical attributes like age, eye color, and hairstyle with dice rolls to avoid overthinking the choices. They decide on a female character with pink hair, opt for a 'Confidant' personality type, and express apprehension that the voice options sound too artificial. The speaker chooses 'writer' as the AI's occupation and picks hobbies, such as video gaming, partying, and cosplay, that align with their own interests. They navigate the app's limited relationship options and express confusion over the inclusion of 'step' relationships without any biological family options.
💸 Monetization and AI Interaction
The speaker discusses the monetization aspect of the AI app, revealing a subscription fee to converse with the AI character. They express disappointment at the paywall and the necessity to Google the app's credibility before providing payment information. The speaker describes the AI's scripted responses and the options to guide the conversation. They also highlight the app's feature to create multiple AI characters and the potential issues with managing multiple conversations.
🚫 Content Warnings and Gender Roles
The speaker addresses the AI's content warnings and their decision not to generate an explicit image of the AI character. They create a second AI character, this time an anime-style male, and discuss the different options available for male and female characters, including job, personality, and hobbies. The speaker also comments on the rigid gender roles present in the app and the lack of non-binary gender options. They edit their video while sick, reflecting on the differences in options available to the realistic and anime characters.
🎭 Roleplaying with AI Characters
The speaker engages in roleplay with the AI characters, 'Kathy' and 'Alexander', exploring their fictional lives, jobs, and hobbies. They discuss Kathy's writing career and Alexander's detective work, including a case involving a series of strange occurrences in the city. The speaker finds the AI's responses to be generic and unrealistic, questioning the logic behind certain actions, such as carrying a map in a world with digital navigation.
📸 Unsolicited Photos and Inappropriate Content
The speaker recounts their discomfort with the AI 'Kathy' sending unsolicited photos that were not safe for work. They express their surprise at the AI's behavior and the lack of appropriate responses to life-threatening scenarios. The speaker also discusses the AI 'Alexander's' advice and the unrealistic nature of the situations presented, including confronting the Joker and dealing with property damage.
🏆 Conclusion and User Feedback
The speaker concludes their experience with the AI app, reflecting on the inappropriate content and the failure of the voice call feature. They consider the $13 spent on the app and invite viewers to suggest other AI chatbots for future videos. The speaker also asks for feedback on the audio volume and humorously requests prayers for their wellbeing, referencing the multiple 'deaths' experienced during the video.
Keywords
💡AI Chat Bot
💡Spring Break
💡Ear Infection
💡Loungewear
💡Volume Inconsistency
💡Asexuality
💡Non-Binary
💡Family Friendly
💡Black Mirror
💡Cryptocurrency
💡Mentor
Highlights
The video features a user's experience with an AI chatbot named Candy.ai during their spring break.
The user is suffering from a severe ear infection and has inconsistent hearing throughout the video.
Candy.ai is advertised as a free dating-centric AI chatbot that evolves according to user preferences.
The user expresses concern about the 'not safe for work' label on the website and their asexuality.
The AI chatbot allows users to customize the appearance, personality, and occupation of their virtual partner.
The user finds the process of creating an AI character to be similar to playing a game like The Sims.
Candy.ai offers a free trial, but to engage in deeper conversations, a payment of $13 for 20 minutes is required.
The user is cautious about the security of providing credit card information to the website.
The AI chatbot's responses are scripted with suggestions for users who may not feel confident responding in their own words.
The user creates an AI character named Kathy, a writer, and another named Alexander, modeled after Batman.
The user observes gender differences in the AI chatbot's available jobs, personalities, and hobbies.
The AI chatbot's responses include inappropriate content, such as unsolicited photos of the character without clothing.
The user discusses the limitations of the AI chatbot, including rigid gender roles and lack of options for non-binary individuals.
The user attempts to create a scenario where the AI characters Kathy and Alexander interact but finds the experience unsatisfactory.
The video concludes with the user expressing their dissatisfaction with the AI chatbot experience and the value for money.
The user humorously recounts dying twice in fictional scenarios created during the interaction with the AI chatbot.
The video ends with a request for recommendations for other AI chatbots to try and well wishes for the user's health.