Looks like we have a full house today for our virtual meetup:
Language Models in Plain English
Recent advances in machine learning have lowered the barriers to creating and using ML models. But understanding what these models are doing has only become more difficult. In their recent report (https://ibm.biz/BdfLSv), authors Austin Eovito and Marina Danilevsky from IBM focus on how to think about neural network-based language model architectures. They guide you through various models (neural networks, RNN/LSTM, encoder-decoder, attention/transformers) to convey a sense of their abilities without getting entangled in the complex details. The report uses simple examples of how humans approach language in specific applications to explore and compare how different neural network-based language models work.
Join Austin and Marina as they discuss their collaboration in authoring the report. This session will summarize its key insights and highlight important takeaways for data scientists. And find out what bats have to do with understanding language models!
It's not too late to register. Please try to join a few minutes early so we can start on time. See you there!
------------------------------
Tim Bonnemann
------------------------------
#GlobalAIandDataScience #GlobalDataScience