

[Pilot] On AI For Creativity - Session 1


"On AI For Creativity" is a series of sharing sessions where we invite experts from various domains (tech, design, finance, education, art and entertainment, etc) to share ideas and work on AI x Creativity. Both online / in-person events will be hosted. In this very first online session, we had two Data Scientists in different fields to share with us on the same topic.


🚀 Talk 1: Introduction To AI-powered Creativity (by Eason)

AI today can do so much: composing art, music, writing, and even computer code. Creative tasks that previously took humans weeks or months, AI may now complete in minutes. With the new billion-dollar market enabled by Generative Technology, creative people need to understand how AI-powered creativity works and prepare for its rise. In this talk, we discuss the history of Generative AI's development, build an intuitive understanding of the technology and its use cases, and consider how it may shape the future of the Web.

  • Generative Models are generally more data- and compute-intensive.
  • Generative Models generally suffer from a quality/diversity trade-off (see the sketch after this list).
  • Summarizing different types of data and tasks into one distribution is the dream.
  • To achieve AGI, there are currently two focuses: (1) Active Learning and (2) Multimodality.
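
The quality/diversity trade-off mentioned above can be seen directly in sampling temperature. Below is a minimal NumPy sketch (the logits are made up for illustration, standing in for a model's next-token scores): lowering the temperature concentrates samples on the highest-scoring token (quality), while raising it spreads samples across the vocabulary (diversity).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_with_temperature(logits, temperature, n_samples=1000):
    """Draw token indices from softmax(logits / temperature)."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(logits), size=n_samples, p=probs)

# Hypothetical next-token scores: token 0 is the "best" continuation.
logits = np.array([3.0, 2.0, 1.0, 0.0, -1.0])

for t in (0.2, 1.0, 2.0):
    samples = sample_with_temperature(logits, t)
    print(f"T={t}: best-token rate={np.mean(samples == 0):.2f}, "
          f"distinct tokens={len(np.unique(samples))}")
```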

If fundamental technology (e.g., Generative Tech) can proliferate:

  • The marginal cost of creativity will be drastically lowered.
  • How we solve problems will change rapidly and paradigmatically.
  • Singularity? Better AI -> Better Science -> Better AI -> Better Science...

🚀 Talk 2: Human Feedback in ChatGPT (by Julius)

Most transformer models are pre-trained with a language modeling objective and then fine-tuned with supervised training. Generative modeling offers many advantages: it simplifies training by removing the fine-tuning steps, and it allows the model to perform many tasks without changing the architecture, eventually leading to models as powerful and as broadly useful as ChatGPT. The key ingredient that separates ChatGPT from its predecessors is that it incorporates human feedback into the training process via Reinforcement Learning. In this talk, Julius discussed how this mechanism works and its implications.
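
To make the mechanism concrete, here is a deliberately tiny sketch of optimizing a policy against a reward signal with REINFORCE. This is not the actual ChatGPT recipe, which trains a reward model on human comparison data and optimizes with PPO plus a KL penalty; the one-softmax "policy" and the hand-written reward function below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "policy": one softmax over a 5-token vocabulary.
# In the real pipeline this would be a full language model.
logits = np.zeros(5)

def reward_model(token):
    """Stand-in for a reward model trained on human preferences:
    the (hypothetical) annotators prefer token 3."""
    return 1.0 if token == 3 else 0.0

lr = 0.5
for _ in range(200):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    token = rng.choice(5, p=probs)   # policy samples an output
    r = reward_model(token)          # feedback is a reward, not a labeled target
    grad = -probs                    # REINFORCE: grad log pi = one_hot - probs
    grad[token] += 1.0
    logits += lr * r * grad

probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(np.round(probs, 3))  # probability mass concentrates on the preferred token
```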

  • Capabilities that don't come naturally, or that require special techniques to build, can't scale; hence the generalist approach to LLMs.
  • He noted that a general generative model is more practical than the pretrain-then-fine-tune approach (e.g., BERT), because most companies don't have the capacity to host many fine-tuned models for different tasks (see the sketch below).
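
A rough sketch of that contrast, where `generate` is a hypothetical stand-in for a call to any hosted LLM completion endpoint: one generative model serves many tasks through prompting, instead of one fine-tuned checkpoint per task.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to one hosted generative model."""
    return f"<model output for {prompt!r}>"

# Fine-tune era (e.g., BERT): sentiment_model, summarizer, ner_model ...
# each a separately trained and separately hosted checkpoint.

# Generalist era: the same model, with the task expressed in the prompt.
def sentiment(text: str) -> str:
    return generate(f"Classify the sentiment of: {text}")

def summarize(text: str) -> str:
    return generate(f"Summarize in one sentence: {text}")

def extract_names(text: str) -> str:
    return generate(f"List the person names mentioned in: {text}")
```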

