LW - Glitch Token Catalog - (Almost) a Full Clear by Lao Mein
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Glitch Token Catalog - (Almost) a Full Clear, published by Lao Mein on September 22, 2024 on LessWrong.
This is a collection of every unidentified GPT2 glitch token listed in the third glitch token archaeology post. I was able to find the source of every single one, except for "?????-" and "?????-?????-"[1]. Please tell me if I missed one, or if you've discovered one whose origin you don't understand. This isn't meant to be a well-written analysis, just a quick repository of my glitch-hunting observations.
I plan to write up and categorize all of these in greater detail in future posts, the first of which is here.
I used OpenWebText, a recreation of GPT2's training data, for all experiments in this post. I tokenized every .gz file in the archive and built a boolean NumPy array marking which tokens were present at least once in each file. This let me quickly identify infrequent tokens in the dataset and pull up their textual context with regular expressions. Where overlap with longer strings was an issue, I used tokenizer-based extraction instead. All data and code are available upon request.
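For concreteness, here is a minimal sketch of that pipeline, assuming the dump is a flat directory of .gz text archives and using the Hugging Face GPT2 tokenizer (the openwebtext/ path and file layout are illustrative assumptions, not necessarily the exact setup used):

    import glob
    import gzip

    import numpy as np
    from transformers import GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    vocab_size = len(tokenizer)  # 50257 for GPT2

    # file_counts[t] = number of archives in which token id t appeared
    # at least once; each archive contributes one boolean presence array.
    file_counts = np.zeros(vocab_size, dtype=np.int64)

    for path in glob.glob("openwebtext/*.gz"):  # hypothetical layout
        with gzip.open(path, "rt", encoding="utf-8", errors="ignore") as f:
            text = f.read()
        ids = tokenizer(text)["input_ids"]
        present = np.zeros(vocab_size, dtype=bool)
        if ids:
            present[np.asarray(ids)] = True
        file_counts += present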
The leftmost column is the token id, the middle column is the token string, and the rightmost column is the number of files (out of 20610) in which the token was present. GPT2 has 50257 total tokens.
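Continuing the sketch above, a table in that layout can be produced by sorting on the per-file counts (again illustrative; file_counts and tokenizer come from the previous snippet):

    # Print the rarest tokens: id, token string (repr), and the number
    # of files (out of the 20610 archives) each one appeared in.
    for tok_id in np.argsort(file_counts, kind="stable")[:130]:
        tok_str = tokenizer.decode([int(tok_id)])
        print(tok_id, repr(tok_str), int(file_counts[tok_id]))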
GPT2 tokens with the lowest frequency in OpenWebText
30898 'embedreportprint' 0
33434 ' 士' 0
43453 ' SolidGoldMagikarp' 0
1849 '\xa0' 0
47654 ' \xa0\xa0' 0
50009 ' strutConnector' 0
36173 ' RandomRedditor' 0
214 '\x1a' 0
42424 'DragonMagazine' 0
180 '           ' 0
187 '          ' 0
186 '         ' 0
30213 ' externalToEVAOnly' 0
30212 ' externalToEVA' 0
30211 ' guiIcon' 0
185 '        ' 0
30210 ' guiActiveUnfocused' 0
30209 ' unfocusedRange' 0
184 '       ' 0
30202 ' guiName' 0
183 '      ' 0
30905 'rawdownload' 0
39906 'EStream' 0
33454 '龍喚士' 0
42586 ' srfN' 0
25992 ' 裏覚醒' 0
43065 '\nsrfAttach' 0
11504 ' \xa0 \xa0' 0
39172 '\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0' 0
40240 'oreAndOnline' 0
40241 'InstoreAndOnline' 0
33477 '\xa0\xa0\xa0' 0
36174 ' RandomRedditorWithNo' 0
37574 'StreamerBot' 0
46600 ' Adinida' 0
182 '     ' 0
29372 ' guiActiveUn' 0
43177 'EStreamFrame' 0
22686 ' \xa0 \xa0 \xa0 \xa0' 0
23282 ' davidjl' 0
47571 ' DevOnline' 0
39752 'quickShip' 0
44320 '\n\xa0' 0
8828 '\xa0\xa0\xa0\xa0' 0
39820 '龍 ' 0
39821 '龍契士' 0
28666 'PsyNetMessage' 0
35207 ' attRot' 0
181 '     ' 0
18472 ' guiActive' 0
179 '    ' 0
17811 '\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0' 0
20174 ' 裏 ' 0
212 '\x18' 0
211 '\x17' 0
210 '\x16' 0
209 '\x15' 0
208 '\x14' 0
31666 '?????-?????-' 0
207 '\x13' 0
206 '\x12' 0
213 '\x19' 0
205 '\x11' 0
203 '\x0f' 0
202 '\x0e' 0
31957 'cffffcc' 0
200 '\x0c' 0
199 '\x0b' 0
197 '\t' 0
196 '\x08' 0
195 '\x07' 0
194 '\x06' 0
193 '\x05' 0
204 '\x10' 0
45545 ' サーティワン' 0
201 '\r' 0
216 '\x1c' 0
37842 ' partName' 0
45706 ' \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0' 0
124 '   ' 0
125 '   ' 0
178 '    ' 0
41380 'natureconservancy' 0
41383 'assetsadobe' 0
177 '   ' 0
215 '\x1b' 0
41551 'Downloadha' 0
4603 '\xa0\xa0' 0
42202 'GoldMagikarp' 0
42089 ' TheNitrome' 0
217 '\x1d' 0
218 '\x1e' 0
42090 ' TheNitromeFan' 0
192 '\x04' 0
191 '\x03' 0
219 '\x1f' 0
189 '\x01' 0
45544 ' サーティ' 0
5624 ' \xa0' 0
190 '\x02' 0
40242 'BuyableInstoreAndOnline' 1
36935 ' dstg' 1
36940 ' istg' 1
45003 ' SetTextColor' 1
30897 'reportprint' 1
39757 'channelAvailability' 1
39756 'inventoryQuantity' 1
39755 'isSpecialOrderable' 1
39811 'soDeliveryDate' 1
39753 'quickShipAvailable' 1
39714 'isSpecial' 1
47198 'ItemTracker' 1
17900 ' Dragonbound' 1
45392 'dayName' 1
37579 'TPPStreamerBot' 1
31573 'ActionCode' 2
25193 'NetMessage' 2
39749 'DeliveryDate' 2
30208 ' externalTo' 2
43569 'ÍÍ' 2
34027 ' actionGroup' 2
34504 ' 裏 ' 2
39446 ' SetFontSize' 2
30899 'cloneembedreportprint' 2
32047 ' "$:/' 3
39803 'soType' 3
39177 'ItemThumbnailImage' 3
49781 'EngineDebug' 3
25658 '?????-' 3
33813 '=~=~' 3
48396 'ÛÛ' 3
34206 ...
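As mentioned above, once a rare token was located, its surrounding text was pulled up with regular expressions. A minimal sketch of that step, where contexts is a hypothetical helper and text is one archive's contents as loaded in the first snippet:

    import re

    def contexts(text, token_str, window=100):
        # Yield roughly `window` characters of context around each
        # raw-string match. When the match overlaps a longer token,
        # re-tokenizing the snippet and checking for the exact token id
        # filters out false positives (the tokenizer-based fallback).
        for m in re.finditer(re.escape(token_str), text):
            lo = max(m.start() - window, 0)
            yield text[lo:m.end() + window]

    for snippet in contexts(text, "SolidGoldMagikarp"):
        print(snippet)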