
Alec Koppel


I am an AI Research Lead/VP in the Multiagent Learning and Simulation Group within Artificial Intelligence Research at JP Morgan Chase & Co. Previously, I was a Research Scientist at Amazon as part of Optimal Sourcing Systems (OSS) within Supply Chain Optimization Technologies (SCOT). Before that, I spent four years as a Research Scientist at the U.S. Army Research Laboratory in the Computational and Information Sciences Directorate (Sept. 2017-2021). My research focuses on optimization and machine learning methods for autonomous systems and supply chain problems, especially inventory planning and vendor selection. I am currently pursuing work in:

  • Reinforcement Learning
  • Scalable online Bayesian and nonparametric methods

Previously, I worked on, and remain interested in:

  • Online Learning and Stochastic Optimization
  • Decentralized Optimization

Generally, I am interested in research questions in learning theory that bridge the gap between the theoretically justified and the practically useful. If you're a student or recent graduate working in similar areas, or on their applications in finance/economics, please reach out to me regarding collaboration opportunities. Before doing so, I strongly suggest you study one or more of my papers and come prepared with questions or directions you would like to discuss. If you would like to apply for an internship or full-time role, please note the following: Compliance restricts my ability to provide formal employment/internship referrals to individuals with whom I have a joint research collaboration that leads to a publication or a patent. Collaboration with me can therefore be a pathway to a formal referral, but I cannot make direct employment referrals for individuals who reach out via cold calls. Put succinctly, if you are interested in direct intellectual interaction and research collaboration, so am I; employment and internship opportunities are a matter for JPMC Recruitment, which is a branch of HR and not part of a scientist's role.

Prior to my professional roles, I completed my Ph.D. in Electrical and Systems Engineering at the University of Pennsylvania in the summer of 2017 -- see my defense presentation, which evolved ideas from my proposal of 12/1/2016. My dissertation may be found here. Concurrently, I completed a Master's degree in Statistics at the Wharton School of the University of Pennsylvania. My doctoral work was supervised by Alejandro Ribeiro in the area of statistical signal processing, in particular distributed online and stochastic optimization.

I also participated in the Science, Mathematics, and Research for Transformation (SMART) Scholarship program sponsored by the U.S. Department of Defense. My sponsoring facility was the U.S. Army Research Laboratory's Computational and Information Sciences Directorate, where I collaborated with Brian Sadler, Ethan Stump, and Jon Fink.

Before coming to Philadelphia, I had some wonderful research experiences at the U.S. Army Research Laboratory with Alma Wickenden and William Nothwang, and as an undergraduate in the Mathematics Department at Washington University in St. Louis (WashU) under the guidance of Renato Feres. I completed my BS in Mathematics (2011) and my MS in Systems Science and Mathematics (2012), both at WashU.

News

  • Mar. 2024: I will be serving as Area Chair for NeurIPS 2024, focusing on the topic areas of reinforcement learning, Bayesian inference, bandit algorithms, and numerical optimization.
  • Jan. 2024: Our paper on using kernelized Stein discrepancy (KSD) for online compression of MCMC samplers was accepted to SIMODS.
  • Jan. 2024: Our paper on policy gradient parameterized by heavy-tailed distributions was accepted to the Journal of Machine Learning Research after four years under review!
  • Dec. 2023: Our paper on multi-agent RL, with Donghao Ying, Yuhao Ding, and Javad Lavaei, appeared at NeurIPS.
  • Aug. 2023: I will be serving as Area Chair for ICLR 2024.
  • Aug. 2023: My colleague Alan Mishler will present our new work on incorporating Bayesian regularization and causal inference techniques to correct for missingness not at random in supervised and active learning at the 2023 Epistemic Uncertainty in Artificial Intelligence Workshop as part of UAI 2023.
  • Aug. 2023: Our work on Gaussian Processes in the non-stationary regime with adaptive hyperparameters will be presented at the 2023 IEEE 19th International Conference on Automation Science and Engineering (CASE).
  • July 2023: Our work on bilevel reinforcement learning as a mathematical model of incorporating human feedback into reinforcement learning systems will be presented at the 2023 ICML Workshop on Interactive Learning with Human Feedback (ILHF).
  • July 2023: Two papers on theoretical aspects of reinforcement learning on which I am a co-author have been accepted to the 2023 International Conference on Machine Learning.
  • May 2023: I will be serving as Area Chair for NeurIPS 2023.
  • Mar. 2023: I am chairing an invited two-part session on Foundational Advances in Reinforcement Learning at IEEE CISS at Johns Hopkins University in Baltimore, MD.
  • Feb. 2023: I am attending AAAI 2023 in Washington, DC.
  • Jan. 2023: At long last, ``On the Sample Complexity of Actor-Critic Method for Reinforcement Learning with Function Approximation" was accepted to Machine Learning (Springer).
  • Dec. 2022: My paper ``Oracle-free Reinforcement Learning in Mean-Field Games along a Single Sample Path" was accepted to 2023 AISTATS.
  • Nov. 2022: My paper ``Posterior Coreset Construction with Kernelized Stein Discrepancy for Model-Based Reinforcement Learning" was accepted to 2023 AAAI.
  • Sep. 2022: I have moved to New York, NY.
  • Jun. 2022: I have begun a new role as an AI Research Lead/Vice President in the Multiagent Learning and Simulation Group within Artificial Intelligence Research at JP Morgan Chase & Co. in New York, NY.
  • Jun. 2022: Several manuscripts under review finally made it to print, spanning IEEE Transactions on Robotics, Elsevier Signal Processing, and IEEE Transactions on Signal Processing.
  • May 2022: Two papers were accepted to 2022 International Conference on Machine Learning (ICML-22) for Spotlight presentations.
  • Mar. 2022: We submitted to an invited session entitled “Distributed optimization and learning for Networked Systems” at 2022 IEEE Conference on Decision and Control.
  • Mar. 2022: Revisions for IEEE TSP, JMLR, IEEE TRO, and Elsevier SIGPRO were completed over the past couple of months -- see the publications tab for details and updated arXiv links.
  • Feb. 2022: Two papers were submitted to the IEEE International Conference on Intelligent Robots and Systems (IROS).
  • Jan. 2022: Our paper on policy gradient methods for ratio optimization problems in MDPs was accepted to the 2022 IEEE Conference on Information Sciences and Systems (CISS).
  • Dec. 2021: Two conference papers have been accepted for publication at AAAI 2022, spanning multi-agent reinforcement learning with general utilities and methods for solving CMDPs with zero constraint violation!
  • Dec. 2021: Two journal papers have been accepted for publication in IEEE Transactions on Signal Processing which span particle selection in importance sampling and ways to converge to improved limit points in successive convex approximation.
  • Sep. 2021: I have started a new Research Scientist position at Amazon as part of Optimal Sourcing Systems (OSS) within Supply Chain Optimization Technologies (SCOT). I am excited to be learning many new facets of the supply chain!
  • Aug. 2021: Our paper on consistent compressions of Gaussian Process posteriors via greedy subset selection with respect to the Hellinger metric has been accepted for publication in Statistics and Computing (Springer).
  • Aug. 2021: Congratulations to Wesley Suttle (AMCS Phd student at Stony Brook University) for winning the ARL Summer Student Symposium Best Project Award associated with our joint research project on information-theoretic exploration in continuous Markov Decision Problems!
  • Jul. 2021: Three contributions to IEEE Asilomar Conference on Signals, Systems, and Computers have been accepted which span multi-agent reinforcement learning, beamforming, and kernel methods for Poisson process estimation.
  • Jun. 2021: I will present ``MARL with General Utilities via Decentralized Shadow Reward Actor-Critic" at the ACM Workshop on Reinforcement Learning in Networks and Queues (RLNQ 2021) on June 14.
  • May 2021: Our work titled ``Cautious Reinforcement Learning via Distributional Risk in the Dual Domain" on risk-sensitive RL that employs dual LP reformulations was accepted to IEEE JSAIT.
  • May 2021: The following work was selected as a finalist (Honorable Mention) for the Best Paper Award from the IEEE Robotics and Automation Society, announced at ICRA 2021: Yulun Tian, Alec Koppel, Amrit Singh Bedi, and Jonathan P. How, "Asynchronous and Parallel Distributed Pose Graph Optimization," in IEEE Robotics and Automation Letters, vol. 5, no. 4, pp. 5819-5826, Oct. 2020.
  • May 2021: I virtually attended IEEE ACC and IEEE ICASSP.
  • Apr. 2021: I gave a guest lecture to Engineering 10301: Defense and Security at Purdue University, West Lafayette, IN, April 5, 2021.
  • Mar. 2021: I will deliver a virtual colloquium to the Dept. of ECE at George Washington University, Washington DC, on March 9, 2021.
  • Mar. 2021: I delivered the IEEE SPS Seminar Series on Optimization and Learning at IIT Kanpur, March 5, 2021.
  • Mar. 2021: Two papers have been accepted for publication at IEEE Trans. Signal Processing, spanning Frank-Wolfe (conditional gradient) methods and kernelized optimization of compositional objectives.
  • Feb. 2021: I will serve on the Task Force for the U.S. Army Research Laboratory's Military and Information Sciences Core Competency.
  • Feb. 2021: One paper accepted to 2021 IEEE ICASSP and two papers accepted to 2021 IEEE ACC.
  • Feb. 2021: I will deliver a virtual seminar to Benjamin Van Roy's group at Stanford Dept. of Operations Research, Palo Alto, CA on Feb. 12, 2021.
  • Dec. 2020: I will deliver the Control, Dynamical Systems, and Computation Seminar at the University of California, Santa Barbara on Dec. 11, 2020.
  • Dec. 2020: Our paper about non-asymptotic local superlinear convergence of incremental Quasi-Newton methods will appear as a spotlight in the NeurIPS Optimization for Machine Learning Workshop on Dec. 11, 2020.
  • Nov. 2020: I will be on the program committee of 2021 ICML and L4DC.
  • Nov. 2020: I organized and presented in an invited session at INFORMS Annual Meeting Nov 10-12 titled ``Uncertainty Representations in Bandits and Reinforcement Learning."
  • Oct. 2020: We submitted a paper about splitting procedures in ensembles of local Gaussian Process models to 2021 IEEE International Conference on Robotics and Automation (ICRA) -- see publications page.
  • Oct. 2020: Three papers were submitted to 2021 International Conference on Acoustics, Speech, and Signal Processing (ICASSP) covering kernel methods, point processes, and beamforming -- see publications page.
  • Oct. 2020: Our paper on risk-sensitive reinforcement learning via LP formulations has been submitted to IEEE Journal on Selected Areas in Information Theory: Special Issue on Sequential, Active, and Reinforcement Learning.
  • Oct. 2020: Our paper on links between occupancy measures, policy gradients, and risk sensitivity in reinforcement learning has been accepted to 2020 NeurIPS as a spotlight!
  • Sep. 2020: I am on the Program Committee of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21).
  • Sep. 2020: Two papers were submitted to 2021 American Control Conference (ACC) covering reinforcement learning and beamforming -- see publications page.
  • Sep. 2020: I will participate in the Army Research Office (ARO) Technical Advisor program under the mentorship of Hamid Krim.
  • Aug. 2020: I mentored three students, Dylan Scott (Hampton University), Andi Johnson (Eastern New Mexico University), and Gisselle Contreras-Velarde (UT Arlington), through the OSD HBCU Summer Internship Program, and they presented at the OSD HBCU Summer Symposium.
  • Aug. 2020: Two students, James Berneburg (George Mason University) and Bingjia Wang (Cornell University), completed their summer internships at ARL under my mentorship and presented at the ARL Summer Symposium.
  • Jul. 2020: I will serve as a reviewer for 2021 International Conference on Learning Representations (ICLR 2021).
  • Jul. 2020: I will serve as a reviewer for 2020 Neural Information Processing Systems (NeurIPS), and am on the Program Committee of the 2020 ICML Theoretical Foundations of Reinforcement Learning Workshop.
  • Jun. 2020: Our paper on policy gradient methods in reinforcement learning has been accepted for publication in SIAM Journal on Control and Optimization.
  • Jun. 2020: Our paper on policy evaluation using compressed kernel methods has been accepted for publication in IEEE Trans. Automatic Control.
  • Jun. 2020: At long last, our work (submitted in 2016!) on randomized coordinate methods and SGD will appear in the Journal of Machine Learning Research.
  • May 2020: A review paper I wrote spotlighting several recent advances in online Bayesian inference will appear in IEEE Signal Processing Magazine.

Disclaimer

The views and opinions expressed on this website are those of the author and do not necessarily reflect the official policy or position of any agency of the U.S. government, Amazon, or JP Morgan Chase & Co.

Contact

Skype: aekoppel
Cell: 314 303 2399
Email: akoppel@seas.upenn.edu
Mail: Artificial Intelligence Research
JP Morgan Chase & Co.
383 Madison Avenue
New York, NY 10017

"The electric things have their life too. Paltry as those lives are."
-- from "Do Androids Dream of Electric Sheep?" by Philip K. Dick