Autonomous chemical research with large language models

  • Brown, T. et al. in Advances in Neural Information Processing Systems Vol. 33 (eds Larochelle, H. et al.) 1877–1901 (Curran Associates, 2020).

  • Thoppilan, R. et al. LaMDA: language models for dialog applications. Preprint at https://arxiv.org/abs/2201.08239 (2022).

  • Touvron, H. et al. LLaMA: open and efficient foundation language models. Preprint at https://arxiv.org/abs/2302.13971 (2023).

  • Hoffmann, J. et al. Training compute-optimal large language models. In Advances in Neural Information Processing Systems 30016–30030 (NeurIPS, 2022).

  • Chowdhery, A. et al. PaLM: scaling language modeling with pathways. J. Mach. Learn. Res. 24, 1–113 (2022).

  • Lin, Z. et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. Science 379, 1123–1130 (2023).

  • Luo, R. et al. BioGPT: generative pre-trained transformer for biomedical text generation and mining. Brief Bioinform. 23, bbac409 (2022).

  • Irwin, R., Dimitriadis, S., He, J. & Bjerrum, E. J. Chemformer: a pre-trained transformer for computational chemistry. Mach. Learn. Sci. Technol. 3, 015022 (2022).

  • Kim, H., Na, J. & Lee, W. B. Generative chemical transformer: neural machine learning of molecular geometric structures from chemical language via attention. J. Chem. Inf. Model. 61, 5804–5814 (2021).

  • Jablonka, K. M., Schwaller, P., Ortega-Guerrero, A. & Smit, B. Leveraging large language models for predictive chemistry. Preprint at https://chemrxiv.org/engage/chemrxiv/article-details/652e50b98bab5d2055852dde (2023).

  • Xu, F. F., Alon, U., Neubig, G. & Hellendoorn, V. J. A systematic evaluation of large language models of code. In Proc. Sixth ACM SIGPLAN International Symposium on Machine Programming 1–10 (ACM, 2022).

  • Nijkamp, E. et al. CodeGen: an open large language model for code with multi-turn program synthesis. In Proc. 11th International Conference on Learning Representations (ICLR, 2022).

  • Kaplan, J. et al. Scaling laws for neural language models. Preprint at https://arxiv.org/abs/2001.08361 (2020).

  • OpenAI. GPT-4 Technical Report (OpenAI, 2023).

  • Ziegler, D. M. et al. Fine-tuning language models from human preferences. Preprint at https://arxiv.org/abs/1909.08593 (2019).

  • Ouyang, L. et al. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems 27730–27744 (NeurIPS, 2022).

  • Granda, J. M., Donina, L., Dragone, V., Long, D.-L. & Cronin, L. Controlling an organic synthesis robot with machine learning to search for new reactivity. Nature 559, 377–381 (2018).

  • Caramelli, D. et al. Discovering new chemistry with an autonomous robotic platform driven by a reactivity-seeking neural network. ACS Cent. Sci. 7, 1821–1830 (2021).

  • Angello, N. H. et al. Closed-loop optimization of general reaction conditions for heteroaryl Suzuki–Miyaura coupling. Science 378, 399–405 (2022).

  • Adamo, A. et al. On-demand continuous-flow production of pharmaceuticals in a compact, reconfigurable system. Science 352, 61–67 (2016).

  • Coley, C. W. et al. A robotic platform for flow synthesis of organic compounds informed by AI planning. Science 365, eaax1566 (2019).

  • Burger, B. et al. A mobile robotic chemist. Nature 583, 237–241 (2020).

  • Auto-GPT: the heart of the open-source agent ecosystem. GitHub https://github.com/Significant-Gravitas/AutoGPT (2023).

  • BabyAGI. GitHub https://github.com/yoheinakajima/babyagi (2023).

  • Chase, H. LangChain. GitHub https://github.com/langchain-ai/langchain (2023).

  • Bran, A. M., Cox, S., White, A. D. & Schwaller, P. ChemCrow: augmenting large-language models with chemistry tools. Preprint at https://arxiv.org/abs/2304.05376 (2023).

  • Liu, P. et al. Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing. ACM Comput. Surv. 55, 195 (2021).

  • Bai, Y. et al. Constitutional AI: harmlessness from AI feedback. Preprint at https://arxiv.org/abs/2212.08073 (2022).

  • Falcon LLM. TII https://falconllm.tii.ae (2023).

  • Open LLM Leaderboard. Hugging Face https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard (2023).

  • Ji, Z. et al. Survey of hallucination in natural language generation. ACM Comput. Surv. 55, 248 (2023).

  • Reaxys https://www.reaxys.com (2023).

  • SciFinder https://scifinder.cas.org (2023).

  • Yao, S. et al. ReAct: synergizing reasoning and acting in language models. In Proc. 11th International Conference on Learning Representations (ICLR, 2022).

  • Wei, J. et al. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 24824–24837 (NeurIPS, 2022).

  • Long, J. Large language model guided tree-of-thought. Preprint at https://arxiv.org/abs/2305.08291 (2023).

  • Opentrons Python Protocol API. Opentrons https://docs.opentrons.com/v2/ (2023).

  • Tu, Z. et al. Approximate nearest neighbor search and lightweight dense vector reranking in multi-stage retrieval architectures. In Proc. 2020 ACM SIGIR on International Conference on Theory of Information Retrieval 97–100 (ACM, 2020).

  • Lin, J. et al. Pyserini: a Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proc. 44th International ACM SIGIR Conference on Research and Development in Information Retrieval 2356–2362 (ACM, 2021).

  • Qadrud-Din, J. et al. Transformer based language models for similar text retrieval and ranking. Preprint at https://arxiv.org/abs/2005.04588 (2020).

  • Paper QA. GitHub https://github.com/whitead/paper-qa (2023).

  • Robertson, S. & Zaragoza, H. The probabilistic relevance framework: BM25 and beyond. Found. Trends Inf. Retr. 3, 333–389 (2009).

  • Data Mining. Mining of Massive Datasets (Cambridge Univ., 2011).

  • Johnson, J., Douze, M. & Jegou, H. Billion-scale similarity search with GPUs. IEEE Trans. Big Data 7, 535–547 (2021).

  • Vechtomova, O. & Wang, Y. A study of the effect of term proximity on query expansion. J. Inf. Sci. 32, 324–333 (2006).

  • Running experiments. Emerald Cloud Lab https://www.emeraldcloudlab.com/guides/runningexperiments (2023).

  • Sanchez-Garcia, R. et al. CoPriNet: graph neural networks provide accurate and fast compound price prediction for molecule prioritisation. Digital Discov. 2, 103–111 (2023).

  • Bubeck, S. et al. Sparks of artificial general intelligence: early experiments with GPT-4. Preprint at https://arxiv.org/abs/2303.12712 (2023).

  • Ramos, M. C., Michtavy, S. S., Porosoff, M. D. & White, A. D. Bayesian optimization of catalysts with in-context learning. Preprint at https://arxiv.org/abs/2304.05341 (2023).

  • Perera, D. et al. A platform for automated nanomole-scale reaction screening and micromole-scale synthesis in flow. Science 359, 429–434 (2018).

  • Ahneman, D. T., Estrada, J. G., Lin, S., Dreher, S. D. & Doyle, A. G. Predicting reaction performance in C–N cross-coupling using machine learning. Science 360, 186–190 (2018).

  • Hickman, R. et al. Atlas: a brain for self-driving laboratories. Preprint at https://chemrxiv.org/engage/chemrxiv/article-details/64f6560579853bbd781bcef6 (2023).
