MultiClass train example
Classification according to the dependent variable (y)
- Binary Class
  - Each sample is simply 0 or 1. [0, 1, 0, ...]
- Multi Class
  - There are k classes and each sample is assigned to exactly one of them. [[0,1,0], [1,0,0], ...]
- Multi Label
  - There are k classes and each sample can belong to up to k of them at the same time. [[1,1,0], [1,0,1], ...] (see the short sketch below)
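As a minimal illustration (made-up values, not taken from the dataset used below), the label arrays for the three cases look like this:
import numpy as np
# Binary class: one 0/1 value per sample
binary_labels = np.array([0, 1, 0])
# Multi class: exactly one of k classes per sample, so each one-hot row sums to 1
multiclass_labels = np.array([[0, 1, 0],
                              [1, 0, 0]])
# Multi label: up to k classes can be active at once, so rows may sum to more than 1
multilabel_labels = np.array([[1, 1, 0],
                              [1, 0, 1]])
print(binary_labels.shape, multiclass_labels.shape, multilabel_labels.shape)  # (3,) (2, 3) (2, 3)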
It was hard to find a good dataset for the multiclass example. Since this is just a sample, I wanted something small, so I used the korean_hate_speech data from Korpora.
import torch
from Korpora import Korpora
import pandas as pd
Load the multiclass data to fine-tune on. The hate column is divided into three classes, which makes it a good example.
korean_hate_speech = Korpora.load('korean_hate_speech')
Korpora 는 다른 분들이 연구 목적으로 공유해주신 말뭉치들을 손쉽게 다운로드, 사용할 수 있는 기능만을 제공합니다.
말뭉치들을 공유해 주신 분들에게 감사드리며, 각 말뭉치 별 설명과 라이센스를 공유 드립니다.
해당 말뭉치에 대해 자세히 알고 싶으신 분은 아래의 description 을 참고,
해당 말뭉치를 연구/상용의 목적으로 이용하실 때에는 아래의 라이센스를 참고해 주시기 바랍니다.

# Description
Authors :
- Jihyung Moon* (inmoonlight@github)
- Won Ik Cho* (warnikchow@github)
- Junbum Lee (beomi@github)
* equal contribution
Repository : https://github.com/kocohub/korean-hate-speech
References :
- Moon, J., Cho, W. I., & Lee, J. (2020). BEEP! Korean Corpus of Online News Comments for Toxic Speech Detection. arXiv preprint arXiv:2005.12503.

We provide the first human-annotated Korean corpus for toxic speech detection and the large unlabeled corpus.
The data is comments from the Korean entertainment news aggregation platform.

# License
Creative Commons Attribution-ShareAlike 4.0 International License.
Visit here for detail : https://creativecommons.org/licenses/by-sa/4.0/

[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\unlabeled\unlabeled_comments_1.txt
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\unlabeled\unlabeled_comments_2.txt
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\unlabeled\unlabeled_comments_3.txt
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\unlabeled\unlabeled_comments_4.txt
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\unlabeled\unlabeled_comments_5.txt
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\news_title\unlabeled_comments.news_title_1.txt
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\news_title\unlabeled_comments.news_title_2.txt
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\news_title\unlabeled_comments.news_title_3.txt
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\news_title\unlabeled_comments.news_title_4.txt
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\news_title\unlabeled_comments.news_title_5.txt
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\news_title\dev.news_title.txt
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\news_title\test.news_title.txt
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\news_title\train.news_title.txt
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\labeled\dev.tsv
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\labeled\train.tsv
[Korpora] Corpus `korean hate speech` is already installed at C:\Users\jun\Korpora\korean_hate_speech\test.no_label.tsv
korean_hate_speech.train
KoreanHateSpeech.train: size=7896
  - KoreanHateSpeech.train.texts : list[str]
  - KoreanHateSpeech.train.titles : list[str]
  - KoreanHateSpeech.train.gender_biases : list[str]
  - KoreanHateSpeech.train.biases : list[str]
  - KoreanHateSpeech.train.hates : list[str]
korean_hate_speech.dev
KoreanHateSpeech.dev: size=471
  - KoreanHateSpeech.dev.texts : list[str]
  - KoreanHateSpeech.dev.titles : list[str]
  - KoreanHateSpeech.dev.gender_biases : list[str]
  - KoreanHateSpeech.dev.biases : list[str]
  - KoreanHateSpeech.dev.hates : list[str]
Put the data into DataFrames.
train_data = pd.DataFrame({"texts":korean_hate_speech.train.texts, "titles":korean_hate_speech.train.titles,
"gender_biases":korean_hate_speech.train.gender_biases, "biases":korean_hate_speech.train.biases,
"hates":korean_hate_speech.train.hates})
test_data = pd.DataFrame({"texts":korean_hate_speech.dev.texts, "titles":korean_hate_speech.dev.titles,
"gender_biases":korean_hate_speech.dev.gender_biases, "biases":korean_hate_speech.dev.biases,
"hates":korean_hate_speech.dev.hates})
train_data
 | texts | titles | gender_biases | biases | hates |
---|---|---|---|---|---|
0 | (현재 호텔주인 심정) 아18 난 마른하늘에 날벼락맞고 호텔망하게생겼는데 누군 계속... | "밤새 조문 행렬…故 전미선, 동료들이 그리워하는 따뜻한 배우 [종합]" | False | others | hate |
1 | ....한국적인 미인의 대표적인 분...너무나 곱고아름다운모습...그모습뒤의 슬픔을... | "'연중' 故 전미선, 생전 마지막 미공개 인터뷰…환하게 웃는 모습 '먹먹'[종합]" | False | none | none |
2 | ...못된 넘들...남의 고통을 즐겼던 넘들..이젠 마땅한 처벌을 받아야지..,그래... | "[단독] 잔나비, 라디오 출연 취소→'한밤' 방송 연기..비판 여론 ing(종합)" | False | none | hate |
3 | 1,2화 어설펐는데 3,4화 지나서부터는 갈수록 너무 재밌던데 | "'아스달 연대기' 장동건-김옥빈, 들끓는 '욕망커플'→눈물범벅 '칼끝 대립'" | False | none | none |
4 | 1. 사람 얼굴 손톱으로 긁은것은 인격살해이고2. 동영상이 몰카냐? 메걸리안들 생각... | [DA:이슈] ‘구하라 비보’ 최종범 항소심에 영향?…법조계 “‘공소권 없음’ 아냐” | True | gender | hate |
... | ... | ... | ... | ... | ... |
7891 | 힘내세요~ 응원합니다!! | "허지웅, 허투루 넘길 말 없었다…솔직하게 드러냈던 속사정" | False | none | none |
7892 | 힘내세요~~삼가 고인의 명복을 빕니다.. | "이혜경, ‘오! 캐롤’ 공연 중 남편 오정욱 부고…오열 속 발인 [종합]" | False | none | none |
7893 | 힘내세용 ^^ 항상 응원합니닷 ^^ ! | "'설경구♥' 송윤아, 아들과 즐거운 하루 ""전 엄마니까요""" | False | none | none |
7894 | 힘내소...연기로 답해요.나도 53살 인데 이런일 저런일 다 있더라구요.인격을 믿습... | "[SC현장]""연예인 인생 협박 유감""…미소잃은 최민수, '보복운전 혐의' 2차... | False | none | none |
7895 | 힘들면 관뒀어야지 그게 현명한거다 | "[단독]스태프 사망사고 '서른이지만', 결국 오늘 촬영 취소" | False | none | none |
7896 rows × 5 columns
test_data
 | texts | titles | gender_biases | biases | hates |
---|---|---|---|---|---|
0 | 송중기 시대극은 믿고본다. 첫회 신선하고 좋았다. | "'아스달 연대기' 뇌안탈 후계자 송중기, 이아르크에 숨어들었다 [종합]" | False | none | none |
1 | 지현우 나쁜놈 | ‘섹션TV’ 유인나 “결혼은 3년 후쯤 했으면…노력해보겠다” | False | none | offensive |
2 | 알바쓰고많이만들면되지 돈욕심없으면골목식당왜나온겨 기댕기게나하고 산에가서팔어라 | "'골목식당' 포방터시장, 계속되는 논란의 중심[SS초점]" | False | none | hate |
3 | 설마 ㅈ 현정 작가 아니지?? | """마음 고생""…나영석·정유미, 황당 불륜 지라시→방송작가 검거→벌금형 선고 [... | True | gender | hate |
4 | 이미자씨 송혜교씨 돈이 그리 많으면 탈세말고 그돈으로 평소에 불우이웃에게 기부도 좀... | "이미자 탈세, 10년간 44억 넘는 소득 신고 누락…약 20억 세금 추가 납부" | False | none | offensive |
... | ... | ... | ... | ... | ... |
466 | 지현우 범죄 저지르지 않았나요? | "[SC현장] '사생결단' 이시영♥지현우, 8년만의 호르몬 로맨스 통할까(종합)" | False | none | offensive |
467 | 여자인생 망칠 일 있나 ㅋㅋ | "박성광, 5월 2일 결혼...""예비신부는 7살 연하 비연예인""(공식)" | True | gender | hate |
468 | 근데 전라도에서 사고가 났는데 굳이 서울까지 와서 병원에 가느 이유는? | "[POP이슈]""차량 반파 교통사고""…송가인, 목·허리 통증→정밀검사 진행(종합)" | False | others | offensive |
469 | 할매젖x, 뱃살x, 몸매 s라인, 유륜은 적당해야됨(너무크거나 너무 작아도 x), ... | "[인터뷰①] 수애 ""노출·베드신多 장르, 부담보다 도전이라 생각했다""" | True | gender | hate |
470 | 남자가 잘못한거라면... 반성도 없다면...나였다면 ... 여자처럼 아주 못되게 할... | "'안재현과 갈등' 구혜선, SNS 활동 재개…""다시 시작"" [종합]" | True | gender | none |
471 rows × 5 columns
train_data[['hates']].value_counts()
hates
none         3486
offensive    2499
hate         1911
dtype: int64
max(len(l) for l in train_data['texts'])
135
max(len(l) for l in test_data['texts'])
137
Load the tokenizer of the pre-trained BERT model that will be used for training, and tokenize the data.
pretrained_model_name="beomi/kcbert-base"
from transformers import AutoTokenizer
# If a warning appears here, install ipywidgets: !pip install ipywidgets
tokenizer = AutoTokenizer.from_pretrained(
pretrained_model_name
)
tokenized_train_sentences = tokenizer(
list(train_data.texts),
return_tensors="pt",
padding=True,
truncation=True,
)
tokenized_test_sentences = tokenizer(
list(test_data.texts),
return_tensors="pt",
padding=True,
truncation=True,
)
Print the tokenized results.
print(tokenized_train_sentences.keys())
print(tokenized_train_sentences['input_ids'])
print(tokenized_train_sentences['attention_mask'])
print(tokenized_train_sentences['token_type_ids'])
dict_keys(['input_ids', 'token_type_ids', 'attention_mask'])
tensor([[    2,    11,  8979,  ...,     0,     0,     0],
        [    2,    17,    17,  ...,     0,     0,     0],
        [    2,    17,    17,  ...,     0,     0,     0],
        ...,
        [    2,  9104,  4066,  ...,     0,     0,     0],
        [    2,  9104,  4266,  ...,     0,     0,     0],
        [    2, 24825,   323,  ...,     0,     0,     0]])
tensor([[1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0],
        ...,
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0],
        [1, 1, 1,  ..., 0, 0, 0]])
tensor([[0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        ...,
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0],
        [0, 0, 0,  ..., 0, 0, 0]])
tokenized_train_sentences['input_ids'].shape
torch.Size([7896, 74])
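To sanity-check the tokenization, one row can be decoded back to text (a quick check added here for illustration; skip_special_tokens drops the [CLS]/[SEP]/[PAD] tokens):
# Decode the first training example back to text to verify the tokenizer output.
sample_ids = tokenized_train_sentences['input_ids'][0].tolist()
print(tokenizer.convert_ids_to_tokens(sample_ids)[:10])        # first few subword tokens
print(tokenizer.decode(sample_ids, skip_special_tokens=True))  # original comment text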
train_labels=pd.get_dummies(train_data[['hates']])
train_labels # hate:[1,0,0], none:[0,1,0], offensive:[0,0,1]
 | hates_hate | hates_none | hates_offensive |
---|---|---|---|
0 | 1 | 0 | 0 |
1 | 0 | 1 | 0 |
2 | 1 | 0 | 0 |
3 | 0 | 1 | 0 |
4 | 1 | 0 | 0 |
... | ... | ... | ... |
7891 | 0 | 1 | 0 |
7892 | 0 | 1 | 0 |
7893 | 0 | 1 | 0 |
7894 | 0 | 1 | 0 |
7895 | 0 | 1 | 0 |
7896 rows × 3 columns
One-hot encode the test labels in the same way.
test_labels=pd.get_dummies(test_data[['hates']])
test_labels
 | hates_hate | hates_none | hates_offensive |
---|---|---|---|
0 | 0 | 1 | 0 |
1 | 0 | 0 | 1 |
2 | 1 | 0 | 0 |
3 | 1 | 0 | 0 |
4 | 0 | 0 | 1 |
... | ... | ... | ... |
466 | 0 | 0 | 1 |
467 | 1 | 0 | 0 |
468 | 0 | 0 | 1 |
469 | 1 | 0 | 0 |
470 | 0 | 1 | 0 |
471 rows × 3 columns
train_labels.values
array([[1, 0, 0], [0, 1, 0], [1, 0, 0], ..., [0, 1, 0], [0, 1, 0], [0, 1, 0]])
test_labels.values
array([[0, 1, 0], [0, 0, 1], [1, 0, 0], ..., [0, 0, 1], [1, 0, 0], [0, 1, 0]])
For the multi-class case, check that the label arrays have been created with the right shape.
train_label = train_labels.values.astype(float) # be sure to convert to float
test_label = test_labels.values.astype(float)
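Because the labels are float one-hot vectors, recent transformers versions (including the 4.10.0 used here) should infer problem_type="multi_label_classification" and train with BCEWithLogitsLoss. If you wanted the more common single-label setup with CrossEntropyLoss instead, integer class indices could be used; a sketch of that alternative (not what this notebook does):
# Alternative (not used here): integer class indices instead of one-hot floats.
# With long-typed labels, BertForSequenceClassification picks single_label_classification
# and uses CrossEntropyLoss.
label_names = sorted(train_data['hates'].unique())           # ['hate', 'none', 'offensive']
label2id = {name: i for i, name in enumerate(label_names)}   # same order as the get_dummies columns
train_label_ids = train_data['hates'].map(label2id).values   # shape (7896,), dtype int64
print(label2id, train_label_ids[:5])
With integer labels, compute_metrics below would use the references directly instead of taking np.argmax over them.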
Prepare a Dataset for the data loader; this is needed so that individual elements can be accessed during batching.
class DataloaderDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings = encodings
        self.labels = labels

    def __getitem__(self, idx):
        # Return one sample as a dict of tensors; 'labels' is the one-hot float vector.
        # Note: torch.tensor on an existing tensor triggers a copy-construct warning;
        # val[idx].clone().detach() would avoid it.
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)
train_dataset = DataloaderDataset(tokenized_train_sentences, train_label)
test_dataset = DataloaderDataset(tokenized_test_sentences, test_label)
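To confirm the Dataset returns what the Trainer expects, one item can be inspected (added here for illustration):
item = train_dataset[0]
print({k: v.shape for k, v in item.items()})   # input_ids/token_type_ids/attention_mask: [74], labels: [3]
print(item['labels'])                          # one-hot float vector, e.g. hate -> [1., 0., 0.]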
from transformers import BertConfig, AutoModelForSequenceClassification, Trainer, TrainingArguments
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
cuda:0
pretrained_model_config = BertConfig.from_pretrained(
pretrained_model_name,
)
model = AutoModelForSequenceClassification.from_pretrained(
pretrained_model_name,
#config=pretrained_model_config,
num_labels=3,
#problem_type="multi_label_classification",
)
Some weights of the model checkpoint at beomi/kcbert-base were not used when initializing BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.bias', 'cls.predictions.decoder.bias', 'cls.predictions.transform.dense.weight', 'cls.seq_relationship.weight']
- This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at beomi/kcbert-base and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Since the labels are one-hot float vectors, apply np.argmax to them as well so that, like the predictions, they become integer class indices: labels_ = np.argmax(labels, axis=-1)
#!pip install evaluate
#!pip install scikit-learn
import numpy as np
import evaluate
metric = evaluate.load("accuracy")
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
labels_ = np.argmax(labels, axis=-1)
return metric.compute(predictions=predictions, references=labels_)
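A quick sanity check of the metric logic with made-up values (two correct predictions, one wrong):
dummy_logits = np.array([[2.0, 0.1, 0.3],
                         [0.2, 1.5, 0.1],
                         [0.1, 0.2, 3.0]])
dummy_labels = np.array([[1., 0., 0.],
                         [0., 1., 0.],
                         [0., 1., 0.]])
print(compute_metrics((dummy_logits, dummy_labels)))   # {'accuracy': 0.666...}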
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
#per_device_train_batch_size=32, # batch size per device during training
#per_device_eval_batch_size=64, # batch size for evaluation
per_device_train_batch_size=5, # batch size per device during training
per_device_eval_batch_size=5, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=100,
save_steps=200,
save_total_limit=2,
save_on_each_node=True,
do_train=True, # Perform training
do_eval=True, # Perform evaluation
evaluation_strategy="epoch",
seed=3
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=test_dataset,
compute_metrics=compute_metrics,
)
trainer.train()
***** Running training *****
  Num examples = 7896
  Num Epochs = 1
  Instantaneous batch size per device = 5
  Total train batch size (w. parallel, distributed & accumulation) = 5
  Gradient Accumulation steps = 1
  Total optimization steps = 1580
C:\Users\jun\AppData\Local\Temp\ipykernel_29796\1263192275.py:7: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
[1580/1580 02:41, Epoch 1/1]
Epoch | Training Loss | Validation Loss | Accuracy |
---|---|---|---|
1 | 0.489900 | 0.463366 | 0.683652 |
Saving model checkpoint to ./results\checkpoint-200
Configuration saved in ./results\checkpoint-200\config.json
Model weights saved in ./results\checkpoint-200\pytorch_model.bin
Deleting older checkpoint [results\checkpoint-1200] due to args.save_total_limit
C:\Users\jun\AppData\Local\Temp\ipykernel_29796\1263192275.py:7: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
Saving model checkpoint to ./results\checkpoint-400
Configuration saved in ./results\checkpoint-400\config.json
Model weights saved in ./results\checkpoint-400\pytorch_model.bin
Deleting older checkpoint [results\checkpoint-1400] due to args.save_total_limit
C:\Users\jun\AppData\Local\Temp\ipykernel_29796\1263192275.py:7: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
Saving model checkpoint to ./results\checkpoint-600
Configuration saved in ./results\checkpoint-600\config.json
Model weights saved in ./results\checkpoint-600\pytorch_model.bin
Deleting older checkpoint [results\checkpoint-200] due to args.save_total_limit
C:\Users\jun\AppData\Local\Temp\ipykernel_29796\1263192275.py:7: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
Saving model checkpoint to ./results\checkpoint-800
Configuration saved in ./results\checkpoint-800\config.json
Model weights saved in ./results\checkpoint-800\pytorch_model.bin
Deleting older checkpoint [results\checkpoint-400] due to args.save_total_limit
C:\Users\jun\AppData\Local\Temp\ipykernel_29796\1263192275.py:7: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
Saving model checkpoint to ./results\checkpoint-1000
Configuration saved in ./results\checkpoint-1000\config.json
Model weights saved in ./results\checkpoint-1000\pytorch_model.bin
Deleting older checkpoint [results\checkpoint-600] due to args.save_total_limit
C:\Users\jun\AppData\Local\Temp\ipykernel_29796\1263192275.py:7: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
Saving model checkpoint to ./results\checkpoint-1200
Configuration saved in ./results\checkpoint-1200\config.json
Model weights saved in ./results\checkpoint-1200\pytorch_model.bin
Deleting older checkpoint [results\checkpoint-800] due to args.save_total_limit
C:\Users\jun\AppData\Local\Temp\ipykernel_29796\1263192275.py:7: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
Saving model checkpoint to ./results\checkpoint-1400
Configuration saved in ./results\checkpoint-1400\config.json
Model weights saved in ./results\checkpoint-1400\pytorch_model.bin
Deleting older checkpoint [results\checkpoint-1000] due to args.save_total_limit
C:\Users\jun\AppData\Local\Temp\ipykernel_29796\1263192275.py:7: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
***** Running Evaluation *****
  Num examples = 471
  Batch size = 5
Training completed. Do not forget to share your model on huggingface.co/models =)
TrainOutput(global_step=1580, training_loss=0.5128312267834627, metrics={'train_runtime': 163.0519, 'train_samples_per_second': 48.426, 'train_steps_per_second': 9.69, 'total_flos': 300269965687776.0, 'train_loss': 0.5128312267834627, 'epoch': 1.0})
With 1 epoch, the accuracy comes out to about 68%.
trainer.save_model("trained_model_hate")
Saving model checkpoint to trained_model_hate
Configuration saved in trained_model_hate\config.json
Model weights saved in trained_model_hate\pytorch_model.bin
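To run predictions with the saved checkpoint, a minimal inference sketch might look like the following. Note that trainer.save_model only saved the model here, so the tokenizer is reloaded from the base checkpoint, and the class order follows the one-hot columns (hates_hate, hates_none, hates_offensive):
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
# Load the tokenizer from the base model and the fine-tuned weights saved above.
infer_tokenizer = AutoTokenizer.from_pretrained("beomi/kcbert-base")
infer_model = AutoModelForSequenceClassification.from_pretrained("trained_model_hate")
infer_model.eval()
text = "좋은 하루 되세요"   # any comment to classify
inputs = infer_tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = infer_model(**inputs).logits
labels = ["hate", "none", "offensive"]   # same order as the one-hot columns
print(labels[logits.argmax(dim=-1).item()])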
Environment
!pip freeze
absl-py==1.4.0 aiohttp==3.8.4 aiosignal==1.3.1 anyio==3.6.2 argon2-cffi==21.3.0 argon2-cffi-bindings==21.2.0 arrow==1.2.3 asttokens==2.2.1 async-timeout==4.0.2 attrs==22.2.0 backcall==0.2.0 beautifulsoup4==4.11.2 bleach==6.0.0 cachetools==5.3.0 certifi==2022.12.7 cffi==1.15.1 charset-normalizer==3.1.0 click==8.1.3 colorama==0.4.6 comm==0.1.2 dataclasses==0.6 datasets==2.10.1 debugpy==1.6.6 decorator==5.1.1 defusedxml==0.7.1 dill==0.3.6 evaluate==0.4.0 executing==1.2.0 fastjsonschema==2.16.3 filelock==3.9.0 Flask==2.2.3 Flask-Cors==3.0.10 flask-ngrok==0.0.25 fqdn==1.5.1 frozenlist==1.3.3 fsspec==2023.3.0 google-auth==2.16.2 google-auth-oauthlib==0.4.6 grpcio==1.51.3 huggingface-hub==0.13.0 idna==3.4 ipykernel==6.21.3 ipython==8.11.0 ipython-genutils==0.2.0 ipywidgets==8.0.4 isoduration==20.11.0 itsdangerous==2.1.2 jedi==0.18.2 Jinja2==3.1.2 joblib==1.2.0 jsonpointer==2.3 jsonschema==4.17.3 jupyter-events==0.6.3 jupyter_client==8.0.3 jupyter_core==5.2.0 jupyter_server==2.4.0 jupyter_server_terminals==0.4.4 jupyterlab-pygments==0.2.2 jupyterlab-widgets==3.0.5 Korpora==0.2.0 Markdown==3.4.1 MarkupSafe==2.1.2 matplotlib-inline==0.1.6 mistune==2.0.5 multidict==6.0.4 multiprocess==0.70.14 nbclassic==0.5.3 nbclient==0.7.2 nbconvert==7.2.9 nbformat==5.7.3 nest-asyncio==1.5.6 notebook==6.5.3 notebook_shim==0.2.2 numpy==1.24.2 oauthlib==3.2.2 packaging==23.0 pandas==1.5.3 pandocfilters==1.5.0 parso==0.8.3 pickleshare==0.7.5 Pillow==9.4.0 platformdirs==3.1.1 prometheus-client==0.16.0 prompt-toolkit==3.0.38 protobuf==4.22.1 psutil==5.9.4 pure-eval==0.2.2 pyarrow==11.0.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycparser==2.21 pyDeprecate==0.3.2 Pygments==2.14.0 pyrsistent==0.19.3 python-dateutil==2.8.2 python-json-logger==2.0.7 pytorch-lightning==1.6.1 pytz==2022.7.1 pywin32==305 pywinpty==2.0.10 PyYAML==6.0 pyzmq==25.0.0 ratsnlp==1.0.52 regex==2022.10.31 requests==2.28.2 requests-oauthlib==1.3.1 responses==0.18.0 rfc3339-validator==0.1.4 rfc3986-validator==0.1.1 rsa==4.9 sacremoses==0.0.53 scikit-learn==1.2.2 scipy==1.10.1 Send2Trash==1.8.0 six==1.16.0 sniffio==1.3.0 soupsieve==2.4 stack-data==0.6.2 tensorboard==2.12.0 tensorboard-data-server==0.7.0 tensorboard-plugin-wit==1.8.1 terminado==0.17.1 threadpoolctl==3.1.0 tinycss2==1.2.1 tokenizers==0.10.3 torch==1.13.1+cu116 torchaudio==0.13.1 torchmetrics==0.11.3 torchvision==0.14.1 tornado==6.2 tqdm==4.65.0 traitlets==5.9.0 transformers==4.10.0 typing_extensions==4.5.0 uri-template==1.2.0 urllib3==1.26.14 wcwidth==0.2.6 webcolors==1.12 webencodings==0.5.1 websocket-client==1.5.1 Werkzeug==2.2.3 widgetsnbextension==4.0.5 xlrd==2.0.1 xxhash==3.2.0 yarl==1.8.2
import platform
platform.python_version()
'3.10.10'