
Hello from August. I’m Diana Kimball Berlin, a partner at Matrix leading rounds from concept through Series A in B2B SaaS and AI startups. I work with companies like Liveblocks, Meshcapade, Accord, and Infinity.
Here are five fragments that stuck with me last week…
Up until three months ago, the bots had the ability to be more creative, intuitive and multi-dimensional. They were being curtailed to meet the writing needs of the people who used them, and they were able to reach farther and deeper than the now (more often than not) shallow pools of creative incentive. In a world where we can be so preoccupied with mundane brain-rot material, imagination needs as much energy and opportunity as it can get.
– Reddit user on r/CharacterAI, August 4, 2024. As an observer of the Character.AI community, I wanted to see how they were holding up in the wake of the founders and much of the team joining Google. A comment deep in the thread mentions the shift (“The original devs jumped ship and a lawyer is the CEO. Your attempt at appealing to reason is noble, but you're dealing with a lawyer and his investors, and the only thing CAI is going to turn into due to catering children and high revenue from Apple and Android is AI Cocomelon, and there's nothing we can do about that.”), but the primary anguish is about the content filters getting stronger and stronger.
An enjoyable, well-flowing, and trustworthy conversation requires a vibrant, conscientious, civil, and non-neurotic chatbot. Interestingly, there is no strong preference towards low artificiality in terms of trustworthiness. There is even a slight preference for higher artificiality.
– “Chatbots with Attitude: Enhancing Chatbot Interactions Through Dynamic Personality Infusion,” July 2024, by Nikola Kovačević, Tobias Boschung, Christian Holz, Markus Gross, and Rafael Wampfler in ACM Conversational User Interfaces 2024 (CUI ’24). I’ve been thinking about personality engineering and what’s next for AI companionship, and found this (recent!) paper in which the researchers “infused” different blends of the Big Five personality traits into chatbot responses by having GPT-4 rewrite the model’s raw responses through the lens of the designated personality. They go on to point out that higher artificiality is seen by users as evidence of a chatbot sticking to the rules, which can be desirable in a transactional conversation (as with an AI support agent). But I think Character.AI’s fans would agree that “sticking to the rules” puts a damper on more creative conversation types, or any attempt at real relationship-building.
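If you’re curious what that two-pass “infusion” setup might look like in practice, here’s a minimal sketch. The trait wording, template, and function names are my own illustration, not the paper’s actual prompts; the idea is just that a second model pass rewrites the raw reply to match a designated Big Five profile.

```python
# Sketch of "personality infusion": instead of generating a persona-flavored
# reply directly, a second LLM pass rewrites the raw reply to match a
# designated Big Five profile. Prompt wording here is illustrative only.

BIG_FIVE = ("openness", "conscientiousness", "extraversion",
            "agreeableness", "neuroticism")

def infusion_prompt(raw_response: str, profile: dict[str, str]) -> str:
    """Build a rewrite instruction for a second model pass (e.g. GPT-4).

    `profile` maps Big Five traits to "low", "medium", or "high";
    unspecified traits default to "medium".
    """
    unknown = set(profile) - set(BIG_FIVE)
    if unknown:
        raise ValueError(f"unknown traits: {unknown}")
    trait_lines = "\n".join(
        f"- {trait}: {profile.get(trait, 'medium')}" for trait in BIG_FIVE
    )
    return (
        "Rewrite the chatbot response below so it expresses this Big Five "
        "personality profile, without changing its factual content:\n"
        f"{trait_lines}\n\n"
        f"Response to rewrite:\n{raw_response}"
    )

# Example: a cheerful, unflappable support-agent persona.
prompt = infusion_prompt(
    "Your order shipped yesterday and should arrive Friday.",
    {"extraversion": "high", "neuroticism": "low"},
)
print(prompt)
```

The prompt string would then be sent to the rewriting model; keeping the persona in a post-processing pass (rather than the base generation) is what lets the researchers vary traits dynamically without retraining anything.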
How does one engineer prespecified, coherent behavior from the cooperation of immense numbers of unreliable parts that are interconnected in unknown, irregular, and time-varying ways?
– “Amorphous Computing,” 1999, by Harold Abelson, Don Allen, Daniel Coore, Christopher P. Hanson, George Homsy, Thomas F. Knight, Jr., Radhika Nagpal, Erik Rauch, Gerald Jay Sussman, and Ron Weiss. Found via this tweet pointing to Bret Victor’s references page; they’re listed alphabetically, so I ended up reading this one first and was struck by the parallels between “amorphous computing” and the relative black box of LLMs. What does it take to extract predictable results from a fuzzy system?
Four years later with some work under my belt, and a clearer idea of who I was, I did make many good friends in my field. But they will never replace my first friends who thought I was special from the start and who believed (on some inexplicable faith) that I would do good things. My most valuable and constructive professional criticism has come from these friends—friends who were not in my field, but were in my “court”.
– Radhika Nagpal, “The Awesomest 7-Year Postdoc or: How I Learned to Stop Worrying and Love the Tenure-Track Faculty Life,” Scientific American blog network, July 21, 2013. Radhika’s name caught my eye in the list of authors on the amorphous computing paper quoted above, so I looked her up and found my way to this post and now I so badly want to meet her. Can you believe these robofish? “Not in your field, but in your court” is also a good description of what I try to bring to the table as an early-stage investor.
How do architects design to fill today’s parking needs, knowing that demand may be drastically reduced within a project’s life cycle [by driverless cars]? One way is to design parking so the space can be repurposed easily for other uses. “We’re telling clients now, if you’re building parking, definitely build above grade, so you can do adaptive reuse,” says [Andy] Cohen. Gensler designed a headquarters in downtown Cincinnati for a data analytics firm named 84.51° that included three levels of parking that could be converted into office space, with a facade that matches the rest of the building.
– Patrick J. Kiger, “Designing for the Driverless Age,” Urban Land, July 23, 2018. I’m visiting Gensler’s headquarters in San Francisco soon and came across this article while doing my research. In the six years since it was published, it’s fascinating to see how quickly the questions have gone from fanciful to urgent, with Waymo blanketing San Francisco and becoming my transit mode of choice overnight.
Until next time,
Diana
https://dianaberlin.com