“No one may still be using the technologies covered in this book ten years from now. But anyone who, through those technologies, comes to understand the technology concepts this book sets out to introduce will, ten years from now, be using the latest technologies of that era better than anyone else.”
From this perspective, we selected technologies that can help with conducting research and writing papers in the field of data science, and compiled this book by summarizing their core concepts along with hands-on experience applying them. [01. Pre-trained Language Model] introduces how to solve new problems by drawing on previously accumulated knowledge. [02. Attention] shows how to take in, with greater focus, only the information that helps solve a problem out of a vast body of information. [03. Autoencoder] is fascinating in that it applies abstraction, the process of extracting the essential elements that make something what it is, to unsupervised learning. [04. Knowledge Distillation] is a current technique for which new models are still being proposed; it deserves attention because it offers a glimpse of how the student can come to surpass the teacher. [05. Topic Modeling] should remain in steady use across many fields, given that text is the most representative medium through which humans record, share, and acquire knowledge. Finally, [06. Building a Python Practice Environment] covers the Python practice environment, one of the most efficient tools for turning a researcher's ideas into code.
01 Pre-trained Language Model
1. Pre-trained Language Models
1-1 Language Models
1-2 Representative Pre-trained Language Models
2. Transformer
2-1 Introduction to the Transformer
2-2 Encoder
2-3 Decoder
2-4 Training the Transformer
3. BERT
3-1 BERT Basic Concepts
3-2 BERT Architecture
3-3 BERT Training Method
3-4 BERT Pre-training
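Since BERT's pre-training (3-3, 3-4) centers on masked language modeling, a quick way to see it in action is Hugging Face's fill-mask pipeline. This is a minimal sketch, not the book's code; the checkpoint name is an assumption, and any BERT MLM checkpoint would do.

```python
# Minimal masked-language-model demo (illustrative, not the book's code).
from transformers import pipeline

# bert-base-uncased is an assumed checkpoint choice.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT was pre-trained to predict the [MASK] token using context
# from both directions.
for candidate in fill_mask("The capital of France is [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```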
4. BERT Hands-on Practice
4-1 Experiment Environment
4-2 Data Preparation
4-3 Data Preprocessing: Training Set
4-4 Data Preprocessing: Test Set
4-5 Model Creation
4-6 Model Training
4-7 Test Set Evaluation
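As a compact preview of the hands-on flow in 4-1 through 4-7 above, a BERT classification fine-tuning loop with the transformers library looks roughly like this; the toy data, checkpoint, and hyperparameters are assumptions, not the book's exact setup.

```python
# Minimal BERT fine-tuning sketch (illustrative; not the book's exact code).
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Toy training data; a real run would use the prepared train/test sets.
texts = ["a great movie", "a terrible movie"]
labels = torch.tensor([1, 0])

# Tokenize into input_ids / attention_mask tensors.
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):                         # a few toy epochs
    outputs = model(**batch, labels=labels)  # loss computed internally
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Evaluation: pick the higher-scoring class per example.
model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds)  # predicted class indices for the toy batch
```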
02 Attention
1. The Attention Concept
1-1 The Attention Mechanism
1-2 Background: How Attention Emerged
1-3 The Attention Function
1-4 How Attention Operates
2. Types of Attention
2-1 Dot-Product Attention
2-2 Scaled Dot-Product Attention
2-3 Bahdanau Attention
2-4 Sparse Attention
3. Applying Attention
3-1 Machine Translation
3-2 Attention Hands-on Practice
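Of the variants listed in section 2, scaled dot-product attention (2-2) is the one used inside the Transformer: softmax(QK^T / sqrt(d_k)) V, a weighted sum of values where the weights come from query-key similarity. A minimal NumPy sketch with toy shapes (all sizes here are illustrative):

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V, weights      # context vectors, attention weights

# Toy example: 2 queries, 3 key/value pairs, dimension 4.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
context, weights = scaled_dot_product_attention(Q, K, V)
print(weights.sum(axis=-1))  # each row of attention weights sums to 1
```

The 1/sqrt(d_k) scaling keeps the dot products from growing with dimension, which would otherwise push the softmax into regions with vanishing gradients.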
03 Autoencoder
1. Autoencoders
1-1 Autoencoders
1-2 Autoencoder Characteristics
1-3 Basic Autoencoder Hands-on Practice
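For reference alongside 1-3: a basic autoencoder is just an encoder that compresses the input into a small code and a decoder trained to reconstruct the input from it. A minimal PyTorch sketch; the layer sizes and random stand-in data are assumptions.

```python
# A minimal autoencoder: compress inputs to a small code, then reconstruct them.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()   # reconstruction error

x = torch.rand(64, 784)  # stand-in for a batch of flattened images
for _ in range(5):
    loss = loss_fn(model(x), x)  # learn to reproduce the input itself
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The low-dimensional code doubles as a learned representation.
print(model.encoder(x).shape)  # torch.Size([64, 32])
```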
2. Types of Autoencoders
2-1 Denoising Autoencoder
2-2 Sparse Autoencoder
3. Autoencoder Application Areas
3-1 Dimensionality Reduction
3-2 Hands-on: Autoencoder-based Dimensionality Reduction
3-3 Classification
3-4 Hands-on: Autoencoders for Classification
3-5 Anomaly Detection
3-6 Hands-on: Autoencoder-based Anomaly Detection
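The anomaly-detection use in 3-5 and 3-6 typically rests on one idea: an autoencoder trained only on normal data reconstructs normal inputs well and anomalous inputs poorly, so the per-sample reconstruction error can serve as an anomaly score. A sketch of the scoring step; the network, batch, and threshold here are all illustrative assumptions.

```python
# Reconstruction-error anomaly scoring (sketch; sizes and threshold assumed).
import torch
import torch.nn as nn

# Stand-in for an autoencoder already trained on *normal* data only.
model = nn.Sequential(nn.Linear(784, 32), nn.ReLU(),
                      nn.Linear(32, 784), nn.Sigmoid())

def anomaly_scores(x):
    """Per-sample mean squared reconstruction error."""
    model.eval()
    with torch.no_grad():
        recon = model(x)
    return ((x - recon) ** 2).mean(dim=1)

x_new = torch.rand(16, 784)   # stand-in batch of flattened inputs
scores = anomaly_scores(x_new)
# In practice the threshold comes from normal validation data,
# e.g. a high percentile of normal-sample scores; 0.95 is illustrative.
threshold = scores.quantile(0.95)
print((scores > threshold).nonzero().flatten())  # indices flagged as anomalous
```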
04 Knowledge Distillation
1. Knowledge Distillation
1-1 Transfer Learning vs. Knowledge Distillation
1-2 Why Knowledge Distillation Is Needed
1-3 Key KD Terms
1-4 KD Code
2. KD Variants
2-1 FitNet
2-2 Teacher Assistant
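Section 1-4 covers KD code; the classic formulation (Hinton-style distillation) combines a temperature-softened KL term pulling the student toward the teacher with ordinary cross-entropy on the hard labels. A minimal sketch; the temperature and mixing weight are assumed values, not the book's.

```python
# Hinton-style knowledge distillation loss: soft-target KL + hard-label CE.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-softened distribution;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)  # from a frozen, pre-trained teacher
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```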
3. Various Model Compression Methods
3-1 Pruning
3-2 Quantization
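As a taste of 3-2: PyTorch's dynamic quantization converts a trained model's Linear weights to int8 in a single call via torch.quantization.quantize_dynamic. The toy model below is an assumption for illustration.

```python
# Dynamic quantization sketch: weights stored as int8, activations
# quantized on the fly at inference time.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                      nn.Linear(256, 10))  # toy trained model

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)  # quantize only Linear layers

x = torch.rand(1, 784)
print(quantized(x).shape)  # same interface, smaller and often faster on CPU
```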
05 Topic Modeling
1. ÅäÇȸ𵨸µ(Topic Modeling)
1-1 ÅäÇȸ𵨸µ ¹ßÀü °úÁ¤
1-2 LDA °³³ä
1-3 LDA ½Ç½À (Àüó¸®)
1-4 LDA ½Ç½À (ÄÚµå ±¸Çö)
1-5 LDA ½Ç½À (½Ç½À °á°ú)
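For orientation alongside 1-3 through 1-5, here is a minimal LDA run with gensim; the library choice and toy corpus are assumptions, and a real run needs proper preprocessing of the kind 1-3 describes.

```python
# Minimal LDA sketch with gensim (toy corpus; illustrative only).
from gensim import corpora
from gensim.models import LdaModel

docs = [["data", "science", "research", "paper"],
        ["topic", "model", "text", "corpus"],
        ["research", "paper", "topic", "model"]]

dictionary = corpora.Dictionary(docs)               # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in docs]  # bag-of-words vectors

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               random_state=0, passes=10)

for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)  # top words per discovered topic
```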
2. Dynamic Topic Modeling (DTM)
2-1 DTM Concepts
2-2 DTM Hands-on (Preprocessing and Code Implementation 1)
2-3 DTM Hands-on (Code Implementation 2)
2-4 DTM Hands-on (Results)
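DTM tracks how topics drift across time slices. One available implementation is gensim's LdaSeqModel; a minimal sketch, assuming gensim and a toy corpus split into two time periods (data this small only demonstrates the API shape, not meaningful topics):

```python
# Dynamic topic model sketch with gensim's LdaSeqModel (illustrative toy data).
from gensim import corpora
from gensim.models import LdaSeqModel

docs = [["economy", "market", "growth"], ["market", "stocks", "trade"],   # period 1
        ["virus", "health", "vaccine"], ["vaccine", "trial", "health"]]   # period 2

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# time_slice gives the number of documents in each consecutive period.
dtm = LdaSeqModel(corpus=corpus, id2word=dictionary,
                  time_slice=[2, 2], num_topics=2)

# Inspect topic 0 as it appears in each time period.
print(dtm.print_topic(topic=0, time=0, top_terms=3))
print(dtm.print_topic(topic=0, time=1, top_terms=3))
```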
3. Author Topic Modeling (ATM)
3-1 ATM Concepts
3-2 ATM Hands-on (Code Implementation)
3-3 ATM Hands-on (Results)
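ATM associates topics with authors rather than only with documents. gensim ships an AuthorTopicModel; a minimal sketch, where the author2doc mapping (author name to document indices) and toy data are assumptions:

```python
# Author-topic model sketch with gensim's AuthorTopicModel (toy data).
from gensim import corpora
from gensim.models import AuthorTopicModel

docs = [["deep", "learning", "model"], ["topic", "model", "text"],
        ["deep", "network", "training"]]
author2doc = {"kim": [0, 2], "lee": [1]}  # which documents each author wrote

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

atm = AuthorTopicModel(corpus=corpus, id2word=dictionary,
                       author2doc=author2doc, num_topics=2, random_state=0)

# Each author gets a distribution over topics.
print(atm.get_author_topics("kim"))
```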
06 ÆÄÀ̽㠽ǽÀȯ°æ ±¸Ãà
1. ÄÚ·¦ ¼³Ä¡
1-1 Colab ¼³Ä¡
1-2 Colab ½ÇÇà
2. ȯ°æ ¼³Á¤
2-1 ·±Å¸ÀÓ À¯Çü º¯°æ
2-2 Å׸¶ ¼³Á¤
2-3 Çà ¹øÈ£ Ç¥½Ã
2-4 ¸ðµå ¼³Á¤
3. À¯¿ëÇÑ ±â´É
3-1 Colab°ú Google Drive ¿¬µ¿
3-2 ÆÄÀÏ°ú Æú´õ°ü¸®
3-3 ÇÑ±Û Ã³¸®
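As a taste of 3-1 and 3-3 above: mounting Google Drive is the standard way to make files persist across Colab sessions, and one common recipe for Korean text in matplotlib figures installs a Nanum font. The snippet runs inside a Colab notebook; the paths are Colab's defaults.

```python
# Run inside a Colab notebook: mount Google Drive so files persist
# across sessions.
from google.colab import drive
drive.mount('/content/drive')  # prompts for authorization on first run

import os
print(os.listdir('/content/drive/MyDrive'))  # top level of your Drive

# For Korean text in matplotlib figures, one common recipe (Colab cells):
#   !apt-get install -y fonts-nanum
#   import matplotlib.pyplot as plt
#   plt.rc('font', family='NanumBarunGothic')  # may need a runtime restart
```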