
Given that the GPT2 tokenizer does not have an internal pad_token_id, how do I pad sentences and do batch inference using GPT2LMHeadModel? Specifically, my code is: prompt_text = [ 'in this paper we', 'we are ...
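
Below is a minimal sketch of the kind of batched setup the question describes, assuming the Hugging Face transformers library: it reuses the EOS token as the pad token and pads on the left so that generation continues directly from the real prompt tokens. The model name "gpt2", the max_length value, and the second prompt string (truncated in the question above) are placeholders.

    # Minimal sketch; "gpt2", max_length=30, and the second prompt are illustrative assumptions.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    # GPT-2 defines no pad token, so reuse the EOS token for padding,
    # and pad on the left so each sequence ends with its real prompt tokens.
    tokenizer.pad_token = tokenizer.eos_token
    tokenizer.padding_side = "left"

    prompt_text = ["in this paper we", "we are"]  # second prompt is a placeholder
    encodings = tokenizer(prompt_text, return_tensors="pt", padding=True)

    with torch.no_grad():
        outputs = model.generate(
            input_ids=encodings["input_ids"],
            attention_mask=encodings["attention_mask"],  # masks the pad positions
            max_length=30,
            pad_token_id=tokenizer.eos_token_id,
        )

    print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

Left padding matters here because GPT-2 generates from the last position of each sequence; with right padding the model would be asked to continue from pad tokens, and passing the attention_mask keeps those pad positions from influencing the output.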