Chatting about ChatGPT: How may AI and GPT impact academia and libraries?

Brady D. Lund

References

Bishop, C. M. (1994). Neural networks and their applications. Review of Scientific Instruments, 65, article 1803.

Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., & Zaremba, W. (2016). OpenAI Gym. arXiv.

Budzianowski, P., & Vulić, I. (2019). Hello, it's GPT-2--how can I help you? Towards the use of pretrained language models for task-oriented dialogue systems. arXiv.

Cherian, A., Peng, K. C., Lohit, S., Smith, K., & Tenenbaum, J. B. (2022). Are deep neural networks SMARTer than second graders? arXiv.

Dale, R. (2017). NLP in a post-truth world. Natural Language Engineering, 23(2), 319-324.

Dale, R. (2021). GPT-3: What's it good for? Natural Language Engineering, 27(1), 113-118.

Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv.

Erhan, D., Bengio, Y., Courville, A., Manzagol, P., & Vincent, P. (2010). Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 11, 625-660.

Floridi, L., & Chiriatti, M. (2020). GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30(4), 681-694.

Goh, G., Cammarata, N., Voss, C., Carter, S., Petrov, M., Schubert, L., Radford, A., & Olah, C. (2021). Multimodal neurons in artificial neural networks. Distill.

King, M. R. (2022). The future of AI in medicine: A perspective from a chatbot. Annals of Biomedical Engineering. https://doi.org/10.1007/s10439-022-03121-w

Kirmani, A. R. (2022). Artificial intelligence-enabled science poetry. ACS Energy Letters, 8, 574-576.

Lee, C., Panda, P., Srinivasan, G., & Roy, K. (2018). Training deep spiking convolutional neural networks with STDP-based unsupervised pre-training followed by supervised fine-tuning. Frontiers in Neuroscience, 12, article 435.

Liu, X., Zheng, Y., Du, Z., Ding, M., Qian, Y., Yang, Z., & Tang, J. (2021). GPT understands, too. arXiv.

Lucy, L., & Bamman, D. (2021). Gender and representation bias in GPT-3 generated stories. Proceedings of the Workshop on Narrative Understanding, 3, 48-55.

MacNeil, S., Tran, A., Mogil, D., Bernstein, S., Ross, E., & Huang, Z. (2022). Generating diverse code explanations using the GPT-3 large language model. Proceedings of the ACM Conference on International Computing Education Research, 2, 37-39.

Manning, C., & Schütze, H. (1999). Foundations of statistical natural language processing. MIT Press.

Marcus, G., Davis, E., & Aaronson, S. (2022). A very preliminary analysis of DALL-E 2. arXiv.

Mollman, S. (2022). ChatGPT gained 1 million users in under a week. 

Niu, Z., Zhong, G., & Yu, H. (2021). A review on the attention mechanism of deep learning. Neurocomputing, 452, 48-62.

OpenAI. (2022). OpenAI about page.

Pavlik, J. V. (2023). Collaborating with ChatGPT: Considering the implications of generative artificial intelligence for journalism and media education. Journalism and Mass Communication Educator. 

Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. 

Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the Annual Meeting of the Association for Computational Linguistics, 57, 3645-3650.

Zhou, X., Chen, Z., Jin, X., & Wang, W. Y. (2021). HULK: An energy efficiency benchmark platform for responsible natural language processing. Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, 16, 329-336.

