User:Bird flock/Computational chemistry
These are all subsections for the article.
Split Operator Technique
How a computational method solves quantum equations impacts the accuracy and efficiency of the method. The split operator technique is one such method for solving differential equations. In computational chemistry, the split operator technique reduces the computational cost of simulating chemical systems.[1] Computational cost refers to how much time a computer needs to calculate these chemical systems; complex systems can take days to compute. Quantum systems are difficult and time-consuming to solve by hand. Split operator methods help computers calculate these systems quickly by breaking a quantum differential equation into sub-problems. The method does this by separating the differential equation into two (or more) simpler equations, one for each operator. Once solved, the split equations are combined into one solution again, giving an easily calculable approximation. For example:
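As a minimal illustration (the linear equation here is an assumed example, not taken from a specific application), consider $\frac{dy}{dt} = (A + B)\,y$, whose exact solution over a time step $t$ is $y(t) = e^{t(A+B)}\, y(0)$. Operator splitting solves the sub-problem for each operator separately and composes the results,
$y(t) \approx e^{tA}\, e^{tB}\, y(0),$
where each factor $e^{tA}$ and $e^{tB}$ is typically much cheaper to evaluate than the full exponential $e^{t(A+B)}$.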
This method is used in many fields that require solving differential equations, such as biology. However, the technique comes with a splitting error: when the exact solution is approximated by solving the split sub-problems one after the other, the recombined result is not exact, only approximately correct. This is an example of first-order splitting.
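For the example above, the size of this error follows from the Baker-Campbell-Hausdorff formula: unless $A$ and $B$ commute,
$e^{tA}\, e^{tB} = e^{t(A+B) + \frac{t^{2}}{2}[A,B] + O(t^{3})},$
so the first-order (Lie) splitting carries a local error of order $t^{2}$ that is proportional to the commutator $[A,B]$.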
There are ways to reduce this error, which include taking the average of two split forms applied in opposite orders. Using the example above, it can be done as shown below.
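Written for the same assumed example, the averaged (symmetrized) form is
$e^{t(A+B)} \approx \tfrac{1}{2}\left( e^{tA}\, e^{tB} + e^{tB}\, e^{tA} \right),$
which cancels the leading commutator error and is accurate to second order; the closely related Strang splitting $e^{\frac{t}{2}A}\, e^{tB}\, e^{\frac{t}{2}A}$ achieves the same order.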
Another way to increase accuracy is to use higher-order splitting. In practice, second-order splitting is usually the highest order used, because higher-order schemes require much more computation time and become difficult to implement, so the extra accuracy is rarely worth the added cost.
Computational chemists spend much time trying to make systems calculated with the split operator technique more accurate while minimizing the computational cost. Finding the middle ground between accuracy and practical computational cost is a major challenge for chemists trying to simulate molecules or chemical environments.
Fields of Application
Catalysis
Computational chemistry is a tool for analyzing catalytic systems without running experiments. Modern electronic structure theory and density functional theory have allowed researchers to discover and understand catalysts.[2] Computational studies apply theoretical chemistry to catalysis research. Density functional theory methods calculate the energies and orbitals of molecules to give models of those structures.[3] Using these methods, researchers can predict values like activation energy, site reactivity[4] and other thermodynamic properties.[3]
Data that is difficult to obtain experimentally can be found using computational methods to model the mechanisms of catalytic cycles.[4] Skilled computational chemists provide predictions that are close to experimental data with proper considerations of methods and basis sets.[3] With good computational data, researchers can predict how catalysts can be improved to lower the cost and increase the efficiency of these reactions.
Drug Development
Computational chemistry is used in drug development to model potentially useful drug molecules and to help companies save time and money. The drug discovery process involves analyzing data, finding ways to improve current molecules, finding synthetic routes, and testing those molecules.[5] Computational chemistry helps with this process by predicting which experiments would be most worthwhile to run before conducting them. Computational methods can also find values that are difficult to measure experimentally, such as the pKa values of compounds.[6] Methods like density functional theory can be used to model drug molecules and find their properties, like their HOMO and LUMO energies and molecular orbitals.[7] Computational chemists also help companies with developing informatics, infrastructure and drug design.
Aside from drug synthesis, computational chemists also research nanomaterial drug carriers. Simulations allow researchers to model environments and test the effectiveness and stability of drug carriers.[8] Understanding how water interacts with these nanomaterials helps ensure the stability of the material in the human body. These computational simulations help researchers optimize the material and find the best way to structure these nanomaterials before making them.
Computational Chemistry Databases
Databases are useful for both computational and non-computational chemists in research and in verifying the validity of computational methods. Empirical data is used to analyze the error of computational methods against experimental data.[9] Empirical data helps researchers choose methods and basis sets and gives greater confidence in their results. Computational chemistry databases are also used in testing software or hardware for computational chemistry.
Databases can also contain purely calculated data,[9] in which computed values are used instead of experimental ones. Purely calculated data avoids having to adjust for different experimental conditions, such as zero-point energy corrections. These calculations can also avoid experimental errors for molecules that are difficult to test. Though purely calculated data is often not perfect, identifying issues is often easier for calculated data than for experimental data.
Databases also give public access to information for researchers to use. They contain data that other researchers have found and uploaded to these databases so that anyone can search for them. Researchers use these databases to find information on molecules of interest and learn what can be done with those molecules.[9] Some publicly available chemistry databases include:
- BindingDB: Contains experimental information about protein-small molecule interactions.[10]
- RCSB: Stores publicly available 3D models of macromolecules (proteins, nucleic acids) and small molecules (drugs, inhibitors)[11]
- ChEMBL: Contains data from research on drug development such as assay results.[9]
- DrugBank: Data about mechanisms of drugs can be found here.[9]
Computational Costs in Chemistry Algorithms
Also see: Computational Complexity
For types of computational complexity classes: List of complexity classes
The computational cost and algorithmic complexity in chemistry are used to help understand and predict chemical phenomena. This section focuses on the scaling of computational complexity with molecule size and details the algorithms commonly used in both quantum chemistry and molecular dynamics.
In quantum chemistry, particularly, the complexity can grow exponentially with the number of electrons involved in the system[12]. This exponential growth is a significant barrier to simulating large or complex systems accurately.
Advanced algorithms in both fields strive to balance accuracy with computational efficiency. For instance, in MD, methods like Verlet integration or Beeman's algorithm are employed for their computational efficiency[13]. In quantum chemistry, hybrid methods combining different computational approaches (like QM/MM) are increasingly used to tackle large biomolecular systems.
Algorithmic Complexity Examples:
1. Molecular Dynamics (MD)
see: Molecular dynamics
Algorithm: Solves Newton's equations of motion for atoms and molecules[14].
Complexity: The standard pairwise interaction calculation in MD leads to an $O(N^2)$ complexity for $N$ particles. This is because each particle interacts with every other particle, resulting in $N(N-1)/2$ interactions[15]. Advanced algorithms, such as the Ewald summation or Fast Multipole Method, reduce this to $O(N \log N)$ or even $O(N)$ by grouping distant particles and treating them as a single entity or using clever mathematical approximations[16][17].
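The quadratic cost is easy to see in a short sketch. The Python below uses randomly generated coordinates and unit charges purely as assumed illustration data (not a real force field) and visits every one of the $N(N-1)/2$ pairs once:

```python
# A minimal sketch of why naive pairwise evaluation in MD scales as O(N^2):
# every unique pair of particles is visited exactly once.
import numpy as np

rng = np.random.default_rng(0)
N = 100
positions = rng.random((N, 3))          # random coordinates in a unit box (illustration only)
charges = rng.choice([-1.0, 1.0], N)    # hypothetical unit charges

def pairwise_coulomb_energy(pos, q):
    """Sum over all N*(N-1)/2 pairs -> O(N^2) work."""
    energy = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = np.linalg.norm(pos[i] - pos[j])
            energy += q[i] * q[j] / r
    return energy

print(f"Total pairwise energy for {N} particles: {pairwise_coulomb_energy(positions, charges):.3f}")
```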
2. Quantum Mechanics/Molecular Mechanics (QM/MM)
see: QM/MM
Algorithm: Combines quantum mechanical calculations for a small region with molecular mechanics for the larger environment.[18]
Complexity: The complexity of QM/MM methods depends on both the size of the quantum region and the method used for quantum calculations[19]. For example, if a Hartree-Fock method is used for the quantum part, the complexity can be approximated as $O(M^4)$, where $M$ is the number of basis functions in the quantum region[19]. This complexity arises from the need to solve a set of coupled equations iteratively until self-consistency is achieved.
3. Hartree-Fock Method:
see also: Hartree-Fock Method
Algorithm: Finds a single Fock state that minimizes the energy.
Complexity: NP-hard or NP-complete, as demonstrated by embedding instances of the Ising model into Hartree-Fock calculations[20]. In practice, the Hartree-Fock method involves solving the Roothaan-Hall equations, which scale as $O(N^3)$ to $O(N^4)$ depending on the implementation, with $N$ being the number of basis functions[20]. The computational cost mainly comes from evaluating and transforming the two-electron integrals.
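For reference, the Roothaan-Hall equations form a generalized eigenvalue problem that is re-solved in every self-consistent-field iteration (standard notation assumed: $F$ is the Fock matrix, $S$ the overlap matrix, $C$ the orbital-coefficient matrix, and $\varepsilon$ the diagonal matrix of orbital energies):
$F C = S C \varepsilon .$
Building $F$ requires the two-electron repulsion integrals $(\mu\nu|\lambda\sigma) = \iint \chi_\mu(\mathbf{r}_1)\, \chi_\nu(\mathbf{r}_1)\, \frac{1}{r_{12}}\, \chi_\lambda(\mathbf{r}_2)\, \chi_\sigma(\mathbf{r}_2)\, d\mathbf{r}_1\, d\mathbf{r}_2$ (real basis functions assumed), of which there are formally $O(N^4)$, which is the origin of the quartic cost quoted above.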
4. Density Functional Theory (DFT):
see also: Density Functional Theory
Algorithm: Investigates the electronic structure (or nuclear structure), principally the ground state, of many-body systems, in particular atoms, molecules, and the condensed phases.
Complexity: Traditional implementations of DFT typically scale as $O(N^3)$, mainly due to the need to diagonalize the Kohn-Sham matrix[21]. The diagonalization step, which finds the eigenvalues and eigenvectors of the matrix, contributes most to this scaling[22]. Recent advances in DFT aim to reduce this complexity through various approximations and algorithmic improvements.
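The bottleneck referred to above is the repeated solution of the Kohn-Sham equations (written here in atomic units, standard notation assumed, with $v_{\mathrm{eff}}$ the effective potential):
$\left( -\tfrac{1}{2}\nabla^{2} + v_{\mathrm{eff}}(\mathbf{r}) \right) \varphi_i(\mathbf{r}) = \varepsilon_i\, \varphi_i(\mathbf{r}),$
whose discretized form is an eigenvalue problem; dense diagonalization of an $N \times N$ matrix costs $O(N^3)$, which dominates the scaling of conventional implementations.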
5. Standard CCSD and CCSD(T) Method
see also: Coupled cluster
Algorithm: CCSD and CCSD(T) methods are advanced electronic structure techniques involving single, double, and in the case of CCSD(T), perturbative triple excitations for calculating electronic correlation effects.
Complexity:
CCSD: Scales as $O(N^6)$, where $N$ is the number of basis functions. This intense computational demand arises from the inclusion of single and double excitations in the electron correlation calculation[23].
CCSD(T): With the addition of perturbative triples, the complexity increases to $O(N^7)$. This elevated complexity restricts practical usage to smaller systems, typically up to 20-25 atoms in conventional implementations[23].
6. Linear-Scaling CCSD(T) Method
see also: Coupled cluster
Algorithm: An adaptation of the standard CCSD(T) method using local natural orbitals (NOs) to significantly reduce the computational burden and enable application to larger systems.
Complexity: Achieves linear scaling with the system size, a major improvement over the steep seventh-power scaling of conventional CCSD(T)[23]. This advancement allows for practical applications to molecules of up to 100 atoms with reasonable basis sets, marking a significant step forward in computational chemistry's capability to handle larger systems with high accuracy[23].
Proving the complexity classes for algorithms involves a combination of mathematical proof and computational experiments. For example, in the case of the Hartree-Fock method, the proof of NP-hardness is a theoretical result derived from complexity theory, specifically through reductions from known NP-hard problems[24].
For other methods like MD or DFT, the computational complexity is often empirically observed and supported by algorithm analysis. In these cases, the claimed scaling rests less on formal mathematical proofs and more on consistently observed computational behaviour across various systems and implementations[25].
Quantum Computational Chemistry
For the basis of quantum computational chemistry see: Quantum Chemistry
For the electronic structure problem see: Electronic Structure
For a specific recap of quantum computing see: Quantum Computing
Quantum computational chemistry is an emerging field that integrates quantum mechanics with computational methods to simulate chemical systems. Despite quantum mechanics' foundational role in understanding chemical behaviors, traditional computational approaches face significant challenges, largely due to the complexity and computational intensity of quantum mechanical equations. This complexity arises from the exponential growth of a quantum system's wave function with each added particle, making exact simulations on classical computers inefficient. [26]
Efficient quantum algorithms for chemistry problems are expected to have run-times and resource requirements that scale polynomially with system size and desired accuracy. Experimental efforts have validated proof-of-principle chemistry calculations, though currently limited to small systems.
Historical Context for Classical Computational Challenges in Quantum Mechanics
- 1929: Dirac noted the inherent complexity of quantum mechanical equations, underscoring the difficulties in solving these equations using classical computation.[27]
- 1982: Feynman proposed using quantum hardware for simulations, addressing the inefficiency of classical computers in simulating quantum systems.[28]
Methods in Quantum Complexity
Qubitization:
[edit]Main Article: Unitary transformation (quantum mechanics)
One of the problems with hamiltonian simulation is the computational complexity inherent to its formation. Qubitization is a mathematical and algorithmic concept in quantum computing to the simulation of quantum systems via Hamiltonian dynamics. The core idea of qubitization is to encode the problem of Hamiltonian simulation in a way that is more efficiently processable by quantum algorithms.[29]
Qubitization involves a transformation of the Hamiltonian operator, a central object in quantum mechanics representing the total energy of a system. In classical computational terms, a Hamiltonian can be thought of as a matrix describing the energy interactions within a quantum system. The goal of qubitization is to embed this Hamiltonian into a larger, unitary operator, which is a type of operator in quantum mechanics that preserves the norm of vectors upon which it acts.[29] This embedding is crucial for enabling the Hamiltonian's dynamics to be simulated on a quantum computer.
Mathematically, the process of qubitization constructs a unitary operator $U$ such that a specific projection of $U$ is proportional to the Hamiltonian $H$ of interest. This relationship can often be represented as $\langle G| U |G\rangle = H/\alpha$, where $|G\rangle$ is a specific quantum state, $\langle G|$ is its conjugate transpose, and $\alpha$ is a normalization constant. The efficiency of this method comes from the fact that the unitary operator $U$ can be implemented on a quantum computer with fewer resources (like qubits and quantum gates) than would be required for directly simulating the time evolution generated by $H$.[29]
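In matrix form, this embedding (often called a block encoding) can be pictured as follows, where the blocks marked $*$ are unconstrained apart from the requirement that $U$ be unitary and $\alpha$ is the same normalization constant as above:
$U = \begin{pmatrix} H/\alpha & * \\ * & * \end{pmatrix}, \qquad \left( \langle G | \otimes I \right) U \left( | G \rangle \otimes I \right) = \frac{H}{\alpha}.$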
A key feature of qubitization is that it simulates Hamiltonian dynamics with high precision while reducing the quantum resource overhead. This efficiency is especially beneficial in quantum algorithms where the simulation of complex quantum systems is necessary, such as in quantum chemistry and materials science simulations. Qubitization also underpins quantum algorithms that solve certain types of problems more efficiently than classical algorithms. For instance, it has implications for the Quantum Phase Estimation algorithm, which is fundamental in various quantum computing applications, including factoring and solving linear systems of equations.
Applications of Qubitization in chemistry:
Gaussian Orbital Basis Sets
Main Article: Basis Set (Chemistry)
In Gaussian orbital basis sets, phase estimation algorithms have been optimized empirically, substantially reducing how the cost scales with the number of basis functions. Advanced Hamiltonian simulation algorithms have further reduced the scaling, with the introduction of techniques like Taylor series methods and qubitization, providing more efficient algorithms with reduced computational requirements.
Plane Wave Basis Sets
Main Article: Basis Set (Chemistry)
Plane wave basis sets, suitable for periodic systems, have also seen advancements in algorithm efficiency, with improvements in product formula-based approaches and Taylor series methods.
Quantum Phase Estimation in Chemistry
For a foundational recap of the quantum Fourier transform, see: Quantum Fourier Transform
Main Article: Quantum Phase Estimation Algorithm
Overview
Phase estimation, as proposed by Kitaev in 1995, identifies the lowest-energy eigenstate ($|E_0\rangle$) and excited states ($|E_k\rangle$) of a physical Hamiltonian, as detailed by Abrams and Lloyd in 1999. In quantum computational chemistry, this technique is employed to encode fermionic Hamiltonians into a qubit framework.
Brief Methodology
1. Initialization: The qubit register is initialized in a state $|\psi\rangle$, which has a nonzero overlap with the Full Configuration Interaction (FCI) target eigenstate of the system.[30] This state is expressed as a sum over the energy eigenstates $|E_i\rangle$ of the Hamiltonian $\hat{H}$, $|\psi\rangle = \sum_i c_i |E_i\rangle$, where $c_i$ represents complex coefficients.
2. Application of Hadamard Gates: Each ancilla qubit undergoes a Hadamard gate application, placing the ancilla register in a superposed state.[30] Subsequently, controlled time-evolution gates (controlled-$U$, with $U = e^{-i\hat{H}t}$) modify this state.
3. Inverse Quantum Fourier Transform: This transform is applied to the ancilla qubits, revealing the phase information that encodes the energy eigenvalues.[30]
4. Measurement: The ancilla qubits are measured in the Z basis, collapsing the main register into the corresponding energy eigenstate $|E_i\rangle$ with probability $|c_i|^2$.[30]
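In compact form, and under the standard textbook conventions assumed here (an $m$-qubit ancilla register and $U = e^{-i\hat{H}t}$), the combined effect of these steps can be written as
$|0\rangle^{\otimes m} \otimes \sum_i c_i |E_i\rangle \;\longrightarrow\; \sum_i c_i\, |\tilde{\phi}_i\rangle \otimes |E_i\rangle,$
where $|\tilde{\phi}_i\rangle$ is an $m$-bit approximation of the phase $\phi_i = E_i t / 2\pi$, so reading out the ancilla register yields an estimate of the eigenvalue $E_i$ with probability $|c_i|^2$.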
Requirements
The algorithm requires ancilla qubits, with their number determined by the desired precision and success probability of the energy estimate. Obtaining a binary energy estimate precise to $n$ bits with a success probability of at least $1 - \epsilon$ necessitates $n + \lceil \log_2(2 + 1/(2\epsilon)) \rceil$ ancilla qubits. This phase estimation has been validated experimentally across various quantum architectures.
Applications of QPEs in chemistry:
Time Evolution and Error Analysis
The total coherent time evolution required for the algorithm scales inversely with the target energy precision.[32] The total evolution time is related to the binary precision $n$, roughly doubling for each additional bit, and the procedure is expected to be repeated for accurate ground state estimation. Errors in the algorithm include errors in the energy eigenvalue estimation, errors in the implemented unitary evolutions, and circuit synthesis errors, which can be quantified using techniques like the Solovay-Kitaev theorem.[33]
The phase estimation algorithm can be enhanced or altered in several ways, such as using a single ancilla qubit for sequential measurements, increasing efficiency, parallelization, or enhancing noise resilience in analytical chemistry.[34] The algorithm can also be scaled using classically obtained knowledge about energy gaps between states.
Limitations
Effective state preparation is needed, as a randomly chosen state would exponentially decrease the probability of collapsing to the desired ground state. Various methods for state preparation have been proposed, including classical approaches and quantum techniques like adiabatic state preparation.[35]
Variational Quantum Eigensolver:
Main Article: Variational Quantum Eigensolver
Overview:
The Variational Quantum Eigensolver (VQE) is a hybrid quantum-classical algorithm that is particularly suited to near-term quantum hardware.[36] Initially proposed by Peruzzo et al. in 2014 and further developed by McClean et al. in 2016, VQE is used to find the lowest eigenvalue of Hamiltonians, particularly those describing chemical systems. It employs the Variational method (quantum mechanics), which guarantees that the expectation value of the Hamiltonian for any parameterized trial wave function is at least the lowest energy eigenvalue of that Hamiltonian.[37] This principle is fundamental to VQE's strategy of optimizing parameters to find the ground state energy. The quantum computer prepares and measures the quantum state, while the classical computer processes these measurements and updates the parameters of the trial state. This synergy allows VQE to overcome some limitations of purely quantum methods.
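The hybrid loop can be illustrated with a minimal Python sketch. The 2x2 Hamiltonian and the one-parameter trial state below are hypothetical toy choices (not a molecular Hamiltonian from the cited works), and the expectation value that a quantum processor would estimate by repeated measurement is simply computed classically here:

```python
# A minimal sketch of the hybrid VQE loop with an assumed toy Hamiltonian.
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical toy Hamiltonian H = Z + 0.5*X (illustration only)
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
H = Z + 0.5 * X

def trial_state(theta):
    # Parameterized trial wave function |psi(theta)> = cos(theta)|0> + sin(theta)|1>
    return np.array([np.cos(theta), np.sin(theta)])

def energy(theta):
    # "Quantum" step: expectation value <psi|H|psi> (computed classically here)
    psi = trial_state(theta)
    return psi @ H @ psi

# "Classical" step: optimizer proposes new parameters to minimize the energy
result = minimize_scalar(energy, bounds=(0, np.pi), method="bounded")
exact_ground = np.linalg.eigvalsh(H)[0]

print(f"VQE estimate: {result.fun:.6f}")
print(f"Exact ground-state energy: {exact_ground:.6f}")
```

As in a full VQE, the classical optimizer only ever sees energy estimates and proposes new parameters; it never needs direct access to the quantum state itself.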
Applications of VQEs in chemistry:
1-RDM and 2-RDM Calculation:
For terminology see: Density Matrix
The reduced density matrices (1-RDM and 2-RDM) can be used to extrapolate the electronic structure of a system.[38]
Ground State Energy Extrapolation:
In the Hamiltonian variational ansatz, the initial state $|\phi_0\rangle$ is prepared to represent the ground state of the molecular Hamiltonian without electron correlations. The evolution of this state under the Hamiltonian, split into commuting segments $H_j$ (with $H = \sum_j H_j$), is given by:
$|\psi(\vec{\theta})\rangle = \prod_{d=1}^{D} \left( \prod_{j} e^{-i \theta_{d,j} H_j} \right) |\phi_0\rangle,$
where $\theta_{d,j}$ are variational parameters optimized to minimize the energy, providing insights into the electronic structure of the molecule.
Measurement Scaling:
McClean et al. (2016) and Romero et al. (2019) proposed a formula to estimate the number of measurements ($M$) required for energy precision $\epsilon$. The formula is given by $M \approx \left( \sum_i |h_i| \right)^2 / \epsilon^2$, where $h_i$ are the coefficients of each Pauli string in the Hamiltonian. This leads to a scaling of $O(N^6/\epsilon^2)$ in a Gaussian orbital basis and $O(N^4/\epsilon^2)$ in a plane wave dual basis.[39][40] Note that $N$ is the number of basis functions in the chosen basis set.
Fermionic Level Grouping:
A method by Bonet-Monroig, Babbush, and O'Brien (2019) focuses on grouping terms at a fermionic level rather than a qubit level, leading to a measurement requirement of only $O(N^2)$ circuits with an additional gate depth of $O(N)$.[41]
Limitations of VQE
While VQE's application in solving the electronic Schrödinger equation for small molecules has shown success, its scalability is hindered by two main challenges: the complexity of the quantum circuits required and the intricacies involved in the classical optimization process. These challenges are significantly influenced by the choice of the variational ansatz, which is used to construct the trial wave function. Consequently, the development of an efficient ansatz is a key focus in current research. Modern quantum computers face limitations in running deep quantum circuits, especially when using the existing ansatzes for problems that exceed several qubits.
Jordan-Wigner Encoding
Also see: Jordan-Wigner Transformations
Jordan-Wigner encoding is a fundamental method in quantum computing, extensively used for simulating fermionic systems like molecular orbitals and electron interactions in quantum chemistry.[42]
Overview:
In quantum chemistry, electrons are modeled as fermions with antisymmetric wave functions. The Jordan-Wigner encoding maps these fermionic orbitals to qubits, preserving their antisymmetric nature. Mathematically, this is achieved by associating each fermionic creation and annihilation operator with corresponding qubit operators through the Jordan-Wigner transformation (one common sign convention):
$a_j^\dagger = \left( \prod_{k<j} Z_k \right) \frac{X_j - i Y_j}{2}, \qquad a_j = \left( \prod_{k<j} Z_k \right) \frac{X_j + i Y_j}{2},$
where $X_j$, $Y_j$, and $Z_j$ are Pauli matrices acting on the $j$-th qubit.
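A small numerical check of this mapping can be written directly with Pauli matrices. The sketch below (an assumed illustration using numpy, following the sign convention given above for three fermionic modes) verifies that the encoded operators obey the fermionic anticommutation relations:

```python
# Build Jordan-Wigner qubit operators for 3 fermionic modes and check
# the canonical anticommutation relations {a_p, a_q^dagger} = delta_pq * I.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

def annihilation(j, n_modes):
    # a_j = (Z_0 ... Z_{j-1}) (X_j + iY_j)/2, identity on the remaining modes
    ops = [Z] * j + [(X + 1j * Y) / 2] + [I2] * (n_modes - j - 1)
    return kron_all(ops)

n = 3
a = [annihilation(j, n) for j in range(n)]

for p in range(n):
    for q in range(n):
        anti = a[p] @ a[q].conj().T + a[q].conj().T @ a[p]
        expected = np.eye(2**n) if p == q else np.zeros((2**n, 2**n))
        assert np.allclose(anti, expected)
print("Jordan-Wigner operators satisfy the fermionic anticommutation relations.")
```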
Applications of Jordan-Wigner Encoding in Chemistry
Electron Hopping
Electron hopping between orbitals, central to chemical bonding and reactions, is represented by terms like $a_p^\dagger a_q + a_q^\dagger a_p$. Under Jordan-Wigner encoding, these transform as follows (for $p < q$):[43]
$a_p^\dagger a_q + a_q^\dagger a_p \;\longrightarrow\; \tfrac{1}{2}\left( X_p\, Z_{p+1} \cdots Z_{q-1}\, X_q + Y_p\, Z_{p+1} \cdots Z_{q-1}\, Y_q \right).$
This transformation captures the quantum mechanical behavior of electron movement and interaction within molecules.[44]
Computational Complexity in Molecular Systems
The complexity of simulating a molecular system using Jordan-Wigner encoding is influenced by the structure of the molecule and the nature of electron interactions. For a molecular system with $N$ orbitals, the number of required qubits scales linearly with $N$, but the complexity of gate operations depends on the specific interactions being modeled.
Limitations of Jordan–Wigner Encoding
The Jordan-Wigner transformation encodes fermionic operators into qubit operators, but it introduces non-local string operators that can make simulations inefficient.[45] The FSWAP gate is used to mitigate this inefficiency by rearranging the ordering of fermions (or their qubit representations), thus simplifying the implementation of fermionic operations.
Fermionic SWAP (FSWAP) Network
FSWAP networks rearrange qubits to efficiently simulate electron dynamics in molecules.[46] These networks are essential for reducing the gate complexity in simulations, especially for non-neighboring electron interactions.
When two fermionic modes (represented as qubits after the Jordan-Wigner transformation) are swapped, the FSWAP gate not only exchanges their states but also correctly updates the phase of the wavefunction to maintain fermionic antisymmetry.[47] This is in contrast to the standard SWAP gate, which does not account for the phase change required in the antisymmetric wavefunctions of fermions.
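In the two-qubit computational basis $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$ (ordering assumed here), the fermionic SWAP gate acts like an ordinary SWAP except for a $-1$ phase when both modes are occupied:
$\mathrm{FSWAP} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}.$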
The use of FSWAP gates can significantly reduce the complexity of quantum circuits for simulating fermionic systems.[48] By intelligently rearranging the fermions, the number of gates required to simulate certain fermionic operations can be reduced, leading to more efficient simulations. This is particularly useful in simulations where fermions need to be moved across large distances within the system, as it can avoid the need for long chains of operations that would otherwise be required.
References
- ^ Lukassen, Axel Ariaan; Kiehl, Martin (2018-12-15). "Operator splitting for chemical reaction systems with fast chemistry". Journal of Computational and Applied Mathematics. 344: 495–511. doi:10.1016/j.cam.2018.06.001. ISSN 0377-0427.
- ^ Elnabawy, Ahmed O.; Rangarajan, Srinivas; Mavrikakis, Manos (2015-08-01). "Computational chemistry for NH3 synthesis, hydrotreating, and NOx reduction: Three topics of special interest to Haldor Topsøe". Journal of Catalysis. Special Issue: The Impact of Haldor Topsøe on Catalysis. 328: 26–35. doi:10.1016/j.jcat.2014.12.018. ISSN 0021-9517.
- ^ a b c Patel, Prajay; Wilson, Angela K. (2020-12-01). "Computational chemistry considerations in catalysis: Regioselectivity and metal-ligand dissociation". Catalysis Today. Proceedings of 3rd International Conference on Catalysis and Chemical Engineering. 358: 422–429. doi:10.1016/j.cattod.2020.07.057. ISSN 0920-5861.
- ^ a b van Santen, R. A. (1996-05-06). "Computational-chemical advances in heterogeneous catalysis". Journal of Molecular Catalysis A: Chemical. Proceedings of the 8th International Symposium on the Relations between Homogeneous and Heterogeneous Catalysis. 107 (1): 5–12. doi:10.1016/1381-1169(95)00161-1. ISSN 1381-1169.
- ^ Tsui, Vickie; Ortwine, Daniel F.; Blaney, Jeffrey M. (2017-03-01). "Enabling drug discovery project decisions with integrated computational chemistry and informatics". Journal of Computer-Aided Molecular Design. 31 (3): 287–291. doi:10.1007/s10822-016-9988-y. ISSN 1573-4951.
- ^ van Vlijmen, Herman; Desjarlais, Renee L.; Mirzadegan, Tara (2017-03). "Computational chemistry at Janssen". Journal of Computer-Aided Molecular Design. 31 (3): 267–273. doi:10.1007/s10822-016-9998-9. ISSN 1573-4951. PMID 27995515.
- ^ Ahmad, Imad; Kuznetsov, Aleksey E.; Pirzada, Abdul Saboor; Alsharif, Khalaf F.; Daglia, Maria; Khan, Haroon (2023). "Computational pharmacology and computational chemistry of 4-hydroxyisoleucine: Physicochemical, pharmacokinetic, and DFT-based approaches". Frontiers in Chemistry. 11. doi:10.3389/fchem.2023.1145974/full. ISSN 2296-2646.
- ^ El-Mageed, H. R. Abd; Mustafa, F. M.; Abdel-Latif, Mahmoud K. (2022-01-02). "Boron nitride nanoclusters, nanoparticles and nanotubes as a drug carrier for isoniazid anti-tuberculosis drug, computational chemistry approaches". Journal of Biomolecular Structure and Dynamics. 40 (1): 226–235. doi:10.1080/07391102.2020.1814871. ISSN 0739-1102.
- ^ a b c d e Muresan, Sorel; Sitzmann, Markus; Southan, Christopher (2012), Larson, Richard S. (ed.), "Mapping Between Databases of Compounds and Protein Targets", Bioinformatics and Drug Discovery, vol. 910, Totowa, NJ: Humana Press, pp. 145–164, doi:10.1007/978-1-61779-965-5_8, ISBN 978-1-61779-964-8, PMC 7449375, PMID 22821596, retrieved 2023-11-19
- ^ Gilson, Michael K.; Liu, Tiqing; Baitaluk, Michael; Nicola, George; Hwang, Linda; Chong, Jenny (2016-01-04). "BindingDB in 2015: A public database for medicinal chemistry, computational chemistry and systems pharmacology". Nucleic Acids Research. 44 (D1): D1045–1053. doi:10.1093/nar/gkv1072. ISSN 1362-4962. PMC 4702793. PMID 26481362.
- ^ Zardecki, Christine; Dutta, Shuchismita; Goodsell, David S.; Voigt, Maria; Burley, Stephen K. (2016-03-08). "RCSB Protein Data Bank: A Resource for Chemical, Biochemical, and Structural Explorations of Large and Small Biomolecules". Journal of Chemical Education. 93 (3): 569–575. doi:10.1021/acs.jchemed.5b00404. ISSN 0021-9584.
- ^ Modern electronic structure theory. 1. Advanced series in physical chemistry. Singapore: World Scientific. 1995. ISBN 978-981-02-2987-0.
- ^ Adcock, Stewart A.; McCammon, J. Andrew (2006-05-01). "Molecular Dynamics: Survey of Methods for Simulating the Activity of Proteins". Chemical Reviews. 106 (5): 1589–1615. doi:10.1021/cr040426m. ISSN 0009-2665. PMC 2547409. PMID 16683746.
- ^ Durrant, Jacob D.; McCammon, J. Andrew (2011-10-28). "Molecular dynamics simulations and drug discovery". BMC Biology. 9 (1): 71. doi:10.1186/1741-7007-9-71. ISSN 1741-7007. PMC 3203851. PMID 22035460.
- ^ Stephan, Simon; Horsch, Martin T.; Vrabec, Jadran; Hasse, Hans (2019-07-03). "MolMod – an open access database of force fields for molecular simulations of fluids". Molecular Simulation. 45 (10): 806–814. doi:10.1080/08927022.2019.1601191. ISSN 0892-7022.
- ^ Kurzak, J.; Pettitt, B. M. (2006-09). "Fast multipole methods for particle dynamics". Molecular Simulation. 32 (10–11): 775–790. doi:10.1080/08927020600991161. ISSN 0892-7022. PMC 2634295. PMID 19194526.
- ^ Giese, Timothy J.; Panteva, Maria T.; Chen, Haoyuan; York, Darrin M. (2015-02-10). "Multipolar Ewald Methods, 1: Theory, Accuracy, and Performance". Journal of Chemical Theory and Computation. 11 (2): 436–450. doi:10.1021/ct5007983. ISSN 1549-9618.
- ^ Groenhof, Gerrit (2013), Monticelli, Luca; Salonen, Emppu (eds.), "Introduction to QM/MM Simulations", Biomolecular Simulations: Methods and Protocols, Methods in Molecular Biology, Totowa, NJ: Humana Press, pp. 43–66, doi:10.1007/978-1-62703-017-5_3, ISBN 978-1-62703-017-5, retrieved 2023-11-21
- ^ a b Tzeliou, Christina Eleftheria; Mermigki, Markella Aliki; Tzeli, Demeter (2022-01). "Review on the QM/MM Methodologies and Their Application to Metalloproteins". Molecules. 27 (9): 2660. doi:10.3390/molecules27092660. ISSN 1420-3049. PMC 9105939. PMID 35566011.
- ^ a b Lucas, Andrew (2014). "Ising formulations of many NP problems". Frontiers in Physics. 2. doi:10.3389/fphy.2014.00005/full. ISSN 2296-424X.
- ^ Michaud-Rioux, Vincent; Zhang, Lei; Guo, Hong (2016-02-15). "RESCU: A real space electronic structure method". Journal of Computational Physics. 307: 593–613. doi:10.1016/j.jcp.2015.12.014. ISSN 0021-9991.
- ^ Motamarri, Phani; Das, Sambit; Rudraraju, Shiva; Ghosh, Krishnendu; Davydov, Denis; Gavini, Vikram (2020-01-01). "DFT-FE – A massively parallel adaptive finite-element code for large-scale density functional theory calculations". Computer Physics Communications. 246: 106853. doi:10.1016/j.cpc.2019.07.016. ISSN 0010-4655.
- ^ a b c d Sengupta, Arkajyoti; Ramabhadran, Raghunath O.; Raghavachari, Krishnan (2016-01-15). "Breaking a bottleneck: Accurate extrapolation to "gold standard" CCSD(T) energies for large open shell organic radicals at reduced computational cost". Journal of Computational Chemistry. 37 (2): 286–295. doi:10.1002/jcc.24050. ISSN 0192-8651.
- ^ Whitfield, James Daniel; Love, Peter John; Aspuru-Guzik, Alán (2013). "Computational complexity in electronic structure". Phys. Chem. Chem. Phys. 15 (2): 397–411. doi:10.1039/C2CP42695A. ISSN 1463-9076.
- ^ Whitfield, James Daniel; Love, Peter John; Aspuru-Guzik, Alán (2013). "Computational complexity in electronic structure". Phys. Chem. Chem. Phys. 15 (2): 397–411. doi:10.1039/C2CP42695A. ISSN 1463-9076.
- ^ "Mathematical surprises and Dirac's formalism in quantum mechanics". Reports on Progress in Physics. 63 (12). 2000. doi:10.1088/0034-4885/63/12/201.
- ^ "Quantum mechanics of many-electron systems". Proceedings of the Royal Society of London. Series A, Containing Papers of a Mathematical and Physical Character. 123 (792): 714–733. 1929-04-06. doi:10.1098/rspa.1929.0094. ISSN 0950-1207.
- ^ Feynman, Richard P. (2019-06-17). Feynman Lectures On Computation. Boca Raton: CRC Press. doi:10.1201/9780429500442. ISBN 978-0-429-50044-2.
- ^ a b c Low, Guang Hao; Chuang, Isaac L. (2019-07-12). "Hamiltonian Simulation by Qubitization". Quantum. 3: 163. doi:10.22331/q-2019-07-12-163.
- ^ a b c d Nielsen, Michael A.; Chuang, Isaac L. (2010). Quantum computation and quantum information (10th anniversary edition ed.). Cambridge: Cambridge university press. ISBN 978-1-107-00217-3.
- ^ McArdle, Sam; Endo, Suguru; Aspuru-Guzik, Alán; Benjamin, Simon C.; Yuan, Xiao (2020-03-30). "Quantum computational chemistry". Reviews of Modern Physics. 92 (1): 015003. doi:10.1103/RevModPhys.92.015003.
- ^ Du, Jiangfeng; Xu, Nanyang; Peng, Xinhua; Wang, Pengfei; Wu, Sanfeng; Lu, Dawei (2010-01-22). "NMR Implementation of a Molecular Hydrogen Quantum Simulation with Adiabatic State Preparation". Physical Review Letters. 104 (3). doi:10.1103/PhysRevLett.104.030502. ISSN 0031-9007.
- ^ Lanyon, B. P.; Whitfield, J. D.; Gillett, G. G.; Goggin, M. E.; Almeida, M. P.; Kassal, I.; Biamonte, J. D.; Mohseni, M.; Powell, B. J.; Barbieri, M.; Aspuru-Guzik, A.; White, A. G. (2010). "Towards quantum chemistry on a quantum computer". Nature Chemistry. 2 (2): 106–111. doi:10.1038/nchem.483. ISSN 1755-4349.
- ^ Wang, Youle; Zhang, Lei; Yu, Zhan; Wang, Xin (2022). "Quantum Phase Processing and its Applications in Estimating Phase and Entropies".
- ^ Sugisaki, Kenji; Toyota, Kazuo; Sato, Kazunobu; Shiomi, Daisuke; Takui, Takeji (2022-07-25). "Adiabatic state preparation of correlated wave functions with nonlinear scheduling functions and broken-symmetry wave functions". Communications Chemistry. 5 (1): 1–13. doi:10.1038/s42004-022-00701-8. ISSN 2399-3669. PMC 9814591. PMID 36698020.
- ^ Peruzzo, Alberto; McClean, Jarrod; Shadbolt, Peter; Yung, Man-Hong; Zhou, Xiao-Qi; Love, Peter J.; Aspuru-Guzik, Alán; O’Brien, Jeremy L. (2014-07-23). "A variational eigenvalue solver on a photonic quantum processor". Nature Communications. 5 (1): 4213. doi:10.1038/ncomms5213. ISSN 2041-1723. PMC 4124861. PMID 25055053.
- ^ Chan, Albie; Shi, Zheng; Dellantonio, Luca; Dur, Wolfgang; Muschik, Christine A (2023). "Hybrid variational quantum eigensolvers: merging computational models".
- ^ Liu, Jie; Li, Zhenyu; Yang, Jinlong (2021-06-28). "An efficient adaptive variational quantum solver of the Schrödinger equation based on reduced density matrices". The Journal of Chemical Physics. 154 (24). doi:10.1063/5.0054822. ISSN 0021-9606.
- ^ Romero, Jonathan; Babbush, Ryan; McClean, Jarrod R; Hempel, Cornelius; Love, Peter J; Aspuru-Guzik, Alán (2018-10-19). "Strategies for quantum computing molecular energies using the unitary coupled cluster ansatz". Quantum Science and Technology. 4 (1): 014008. doi:10.1088/2058-9565/aad3e4. ISSN 2058-9565.
- ^ McClean, Jarrod R; Romero, Jonathan; Babbush, Ryan; Aspuru-Guzik, Alán (2016-02-04). "The theory of variational hybrid quantum-classical algorithms". New Journal of Physics. 18 (2): 023023. doi:10.1088/1367-2630/18/2/023023. ISSN 1367-2630.
- ^ Bonet-Monroig, Xavier; Babbush, Ryan; O’Brien, Thomas E. (2020-09-22). "Nearly Optimal Measurement Scheduling for Partial Tomography of Quantum States". Physical Review X. 10 (3): 031064. doi:10.1103/PhysRevX.10.031064.
- ^ Jiang, Zhang; Sung, Kevin J.; Kechedzhi, Kostyantyn; Smelyanskiy, Vadim N.; Boixo, Sergio (2018-04-26). "Quantum Algorithms to Simulate Many-Body Physics of Correlated Fermions". Physical Review Applied. 9 (4). doi:10.1103/PhysRevApplied.9.044036. ISSN 2331-7019.
- ^ Jiang, Zhang; Sung, Kevin J.; Kechedzhi, Kostyantyn; Smelyanskiy, Vadim N.; Boixo, Sergio (2018-04-26). "Quantum Algorithms to Simulate Many-Body Physics of Correlated Fermions". Physical Review Applied. 9 (4). doi:10.1103/PhysRevApplied.9.044036. ISSN 2331-7019.
- ^ Li, Qing-Song; Liu, Huan-Yu; Wang, Qingchun; Wu, Yu-Chun; Guo, Guo-Ping (2022). "A unified framework of transformations based on the Jordan–Wigner transformation". pubs.aip.org. doi:10.1063/5.0107546. Retrieved 2023-11-13.
- ^ "Custom Fermionic Codes for Quantum Simulation | Perimeter Institute". www2.perimeterinstitute.ca. Retrieved 2023-11-13.
- ^ Kivlichan, Ian D.; McClean, Jarrod; Wiebe, Nathan; Gidney, Craig; Aspuru-Guzik, Alán; Chan, Garnet Kin-Lic; Babbush, Ryan (2018-03-13). "Quantum Simulation of Electronic Structure with Linear Depth and Connectivity". Physical Review Letters. 120 (11): 110501. doi:10.1103/PhysRevLett.120.110501.
- ^ Hashim, Akel; Rines, Rich; Omole, Victory; Naik, Ravi K; John MarkKreikebaum, John Mark; Santiago, David I; Chong, Frederic T.; Siddiqi, Irfan; Gokhale, Pranav (2021). "Optimized fermionic SWAP networks with equivalent circuit averaging for QAOA".
- ^ Rubin, Nicholas C.; Gunst, Klaas; White, Alec; Freitag, Leon; Throssell, Kyle; Chan, Garnet Kin-Lic; Babbush, Ryan; Shiozaki, Toru (2021-10-27). "The Fermionic Quantum Emulator". Quantum. 5: 568. doi:10.22331/q-2021-10-27-568.