Commit 7798e52
matteobettini committed Dec 19, 2024
1 parent d57ac86
Showing 6 changed files with 14 additions and 7 deletions.
index.html (4 additions, 2 deletions)
@@ -23,7 +23,8 @@
<a class="btn btn-outline-primary btn-page-header btn-sm" href="https://www.youtube.com/watch?v=1tOIMgJf_VQ" target=_blank rel=noopener>Talk</a>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=https://arxiv.org/abs/2312.01472 target=_blank rel=noopener>arXiv</a>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=/publication/benchmarl/poster.pdf>Poster</a>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=http://jmlr.org/papers/v25/23-1612.html target=_blank rel=noopener>Proceedings</a></div></div></div></div></div></section><section id=publications class="home-section wg-portfolio"><div class=home-section-bg></div><div class=container><div class="row justify-content-center"><div class="section-heading col-12 mb-3 text-center"><h1 class=mb-0>Publications</h1></div><div class=col-12><p>To find relevant content, try <a href=./publication/>searching publications</a> or filtering using the buttons below.</p><span class="d-none default-project-filter">*</span><div class=project-toolbar><div class=project-filters><div class=btn-toolbar><div class="btn-group flex-wrap"><a href=# data-filter=* class="btn btn-primary btn-lg active">All</a>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=http://jmlr.org/papers/v25/23-1612.html target=_blank rel=noopener>Proceedings</a>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=https://neurips.cc/virtual/2024/poster/98318 target=_blank rel=noopener>NeurIPS</a></div></div></div></div></div></section><section id=publications class="home-section wg-portfolio"><div class=home-section-bg></div><div class=container><div class="row justify-content-center"><div class="section-heading col-12 mb-3 text-center"><h1 class=mb-0>Publications</h1></div><div class=col-12><p>To find relevant content, try <a href=./publication/>searching publications</a> or filtering using the buttons below.</p><span class="d-none default-project-filter">*</span><div class=project-toolbar><div class=project-filters><div class=btn-toolbar><div class="btn-group flex-wrap"><a href=# data-filter=* class="btn btn-primary btn-lg active">All</a>
<a href=# data-filter=.js-id-Heterogeneity class="btn btn-primary btn-lg">Heterogeneity</a>
<a href=# data-filter=.js-id-Multi-Agent-Reinforcement-Learning class="btn btn-primary btn-lg">Multi-Agent Reinforcement Learning</a>
<a href=# data-filter=.js-id-Software-library class="btn btn-primary btn-lg">Software library</a></div></div></div></div><div class="isotope projects-container row js-layout-row"><div class="col-12 isotope-item js-id-Heterogeneity js-id-Multi-Agent-Reinforcement-Learning"><div class=container><div class="row media stream-item"><div class="col col-sm-12 col-lg-8 col-md-7 ml-3 media-body"><div class="section-subheading article-title mb-0 mt-0"><a href=/publication/heterogeneous-teams/>Heterogeneous Teams</a></div><a href=/publication/heterogeneous-teams/ class=summary-link><div class=article-style>The aim of this chapter is to provide an overview of heterogeneous teams, from a robotics perspective.</div></a><div class="stream-meta article-metadata"><div class=article-metadata><div><span><a href=/authors/prorok/>Amanda Prorok</a></span>, <span class=author-highlighted><a href=/authors/admin/>Matteo Bettini</a></span></div><span class=article-date>2024</span>
@@ -55,7 +56,8 @@
<a class="btn btn-outline-primary btn-page-header btn-sm" href="https://www.youtube.com/watch?v=1tOIMgJf_VQ" target=_blank rel=noopener>Talk</a>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=https://arxiv.org/abs/2312.01472 target=_blank rel=noopener>arXiv</a>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=/publication/benchmarl/poster.pdf>Poster</a>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=http://jmlr.org/papers/v25/23-1612.html target=_blank rel=noopener>Proceedings</a></div></div><div class="col col col-sm-12 col-lg-4 col-md-5 ml-3 mb-3"><a href=/publication/benchmarl/><img src=/publication/benchmarl/compact_hu0971da7d7467813a850f11c8b39be638_3051643_300x0_resize_lanczos_3.png alt="BenchMARL: Benchmarking Multi-Agent Reinforcement Learning" loading=lazy></a></div></div></div></div><div class="col-12 isotope-item js-id-Multi-Agent-Reinforcement-Learning js-id-Software-library"><div class=container><div class="row media stream-item"><div class="col col-sm-12 col-lg-8 col-md-7 ml-3 media-body"><div class="section-subheading article-title mb-0 mt-0"><a href=/publication/torchrl-a-data-driven-decision-making-library-for-pytorch/>TorchRL: A data-driven decision-making library for PyTorch</a></div><a href=/publication/torchrl-a-data-driven-decision-making-library-for-pytorch/ class=summary-link><div class=article-style>We propose TorchRL, a generalistic control library for PyTorch that provides well-integrated, yet standalone components. With a versatile and robust primitive design, TorchRL facilitates streamlined algorithm development across the many branches of Reinforcement Learning (RL) and control. We introduce a new PyTorch primitive, TensorDict, as a flexible data carrier that empowers the integration of the library’s components while preserving their modularity. TorchRL fosters long-term support and is publicly available on GitHub for greater reproducibility and collaboration within the research community.</div></a><div class="stream-meta article-metadata"><div class=article-metadata><div><span><a href=/authors/albert-bou/>Albert Bou</a></span>, <span class=author-highlighted><a href=/authors/admin/>Matteo Bettini</a></span>, <span><a href=/authors/sebastian-dittert/>Sebastian Dittert</a></span>, <span><a href=/authors/vikash-kumar/>Vikash Kumar</a></span>, <span><a href=/authors/shagun-sodhani/>Shagun Sodhani</a></span>, <span><a href=/authors/xiaomeng-yang/>Xiaomeng Yang</a></span>, <span><a href=/authors/gianni-de-fabritiis/>Gianni De Fabritiis</a></span>, <span><a href=/authors/vincent-moens/>Vincent Moens</a></span></div><span class=article-date>2024</span>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=http://jmlr.org/papers/v25/23-1612.html target=_blank rel=noopener>Proceedings</a>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=https://neurips.cc/virtual/2024/poster/98318 target=_blank rel=noopener>NeurIPS</a></div></div><div class="col col col-sm-12 col-lg-4 col-md-5 ml-3 mb-3"><a href=/publication/benchmarl/><img src=/publication/benchmarl/compact_hu0971da7d7467813a850f11c8b39be638_3051643_300x0_resize_lanczos_3.png alt="BenchMARL: Benchmarking Multi-Agent Reinforcement Learning" loading=lazy></a></div></div></div></div><div class="col-12 isotope-item js-id-Multi-Agent-Reinforcement-Learning js-id-Software-library"><div class=container><div class="row media stream-item"><div class="col col-sm-12 col-lg-8 col-md-7 ml-3 media-body"><div class="section-subheading article-title mb-0 mt-0"><a href=/publication/torchrl-a-data-driven-decision-making-library-for-pytorch/>TorchRL: A data-driven decision-making library for PyTorch</a></div><a href=/publication/torchrl-a-data-driven-decision-making-library-for-pytorch/ class=summary-link><div class=article-style>We propose TorchRL, a generalistic control library for PyTorch that provides well-integrated, yet standalone components. With a versatile and robust primitive design, TorchRL facilitates streamlined algorithm development across the many branches of Reinforcement Learning (RL) and control. We introduce a new PyTorch primitive, TensorDict, as a flexible data carrier that empowers the integration of the library’s components while preserving their modularity. TorchRL fosters long-term support and is publicly available on GitHub for greater reproducibility and collaboration within the research community.</div></a><div class="stream-meta article-metadata"><div class=article-metadata><div><span><a href=/authors/albert-bou/>Albert Bou</a></span>, <span class=author-highlighted><a href=/authors/admin/>Matteo Bettini</a></span>, <span><a href=/authors/sebastian-dittert/>Sebastian Dittert</a></span>, <span><a href=/authors/vikash-kumar/>Vikash Kumar</a></span>, <span><a href=/authors/shagun-sodhani/>Shagun Sodhani</a></span>, <span><a href=/authors/xiaomeng-yang/>Xiaomeng Yang</a></span>, <span><a href=/authors/gianni-de-fabritiis/>Gianni De Fabritiis</a></span>, <span><a href=/authors/vincent-moens/>Vincent Moens</a></span></div><span class=article-date>2024</span>
<span class=middot-divider></span>
<span class=pub-publication>In <em>International Conference on Learning Representations (ICLR)</em> - <strong><em>Spotlight (top 5%)</em></strong></span></div></div><div class=btn-links><a class="btn btn-outline-primary btn-page-header btn-sm" href=/publication/torchrl-a-data-driven-decision-making-library-for-pytorch/TorchRL-A-data-driven-decision-making-library-for-PyTorch.pdf target=_blank rel=noopener>PDF</a>
<a href=# class="btn btn-outline-primary btn-page-header btn-sm js-cite-modal" data-filename=/publication/torchrl-a-data-driven-decision-making-library-for-pytorch/cite.bib>Cite</a>
publication-type/2/index.html (2 additions, 1 deletion)
@@ -13,7 +13,8 @@
<a class="btn btn-outline-primary btn-page-header btn-sm" href="https://www.youtube.com/watch?v=1tOIMgJf_VQ" target=_blank rel=noopener>Talk</a>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=https://arxiv.org/abs/2312.01472 target=_blank rel=noopener>arXiv</a>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=/publication/benchmarl/poster.pdf>Poster</a>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=http://jmlr.org/papers/v25/23-1612.html target=_blank rel=noopener>Proceedings</a></div></div><div class="col col col-sm-12 col-lg-4 col-md-5 ml-3 mb-3"><a href=/publication/benchmarl/><img src=/publication/benchmarl/compact_hu0971da7d7467813a850f11c8b39be638_3051643_300x0_resize_lanczos_3.png alt="BenchMARL: Benchmarking Multi-Agent Reinforcement Learning" loading=lazy></a></div></div></div></div></div><div class=page-footer><div class=container><footer class=site-footer><p class=powered-by>View <a href=https://github.com/matteobettini/professional_website target=_blank rel=noopener>source</a>.</p></footer></div></div><div id=modal class="modal fade" role=dialog><div class=modal-dialog><div class=modal-content><div class=modal-header><h5 class=modal-title>Cite</h5><button type=button class=close data-dismiss=modal aria-label=Close>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=http://jmlr.org/papers/v25/23-1612.html target=_blank rel=noopener>Proceedings</a>
<a class="btn btn-outline-primary btn-page-header btn-sm" href=https://neurips.cc/virtual/2024/poster/98318 target=_blank rel=noopener>NeurIPS</a></div></div><div class="col col col-sm-12 col-lg-4 col-md-5 ml-3 mb-3"><a href=/publication/benchmarl/><img src=/publication/benchmarl/compact_hu0971da7d7467813a850f11c8b39be638_3051643_300x0_resize_lanczos_3.png alt="BenchMARL: Benchmarking Multi-Agent Reinforcement Learning" loading=lazy></a></div></div></div></div></div><div class=page-footer><div class=container><footer class=site-footer><p class=powered-by>View <a href=https://github.com/matteobettini/professional_website target=_blank rel=noopener>source</a>.</p></footer></div></div><div id=modal class="modal fade" role=dialog><div class=modal-dialog><div class=modal-content><div class=modal-header><h5 class=modal-title>Cite</h5><button type=button class=close data-dismiss=modal aria-label=Close>
<span aria-hidden=true>&#215;</span></button></div><div class=modal-body><pre><code class="tex hljs"></code></pre></div><div class=modal-footer><a class="btn btn-outline-primary my-1 js-copy-cite" href=# target=_blank><i class="fas fa-copy"></i> Copy</a>
<a class="btn btn-outline-primary my-1 js-download-cite" href=# target=_blank><i class="fas fa-download"></i> Download</a><div id=modal-error></div></div></div></div></div><script src=https://cdnjs.cloudflare.com/ajax/libs/jquery/3.5.1/jquery.min.js integrity="sha256-9/aliU8dGd2tb6OSsuzixeV4y/faTqgFtohetphbbj0=" crossorigin=anonymous></script>
<script src=https://cdnjs.cloudflare.com/ajax/libs/instant.page/5.1.0/instantpage.min.js integrity="sha512-1+qUtKoh9XZW7j+6LhRMAyOrgSQKenQ4mluTR+cvxXjP1Z54RxZuzstR/H9kgPXQsVB8IW7DMDFUJpzLjvhGSQ==" crossorigin=anonymous></script>
publication/benchmarl/index.html (2 additions, 1 deletion)
@@ -11,7 +11,8 @@
<a class="btn btn-outline-primary btn-page-header" href="https://www.youtube.com/watch?v=1tOIMgJf_VQ" target=_blank rel=noopener>Talk</a>
<a class="btn btn-outline-primary btn-page-header" href=https://arxiv.org/abs/2312.01472 target=_blank rel=noopener>arXiv</a>
<a class="btn btn-outline-primary btn-page-header" href=/publication/benchmarl/poster.pdf>Poster</a>
<a class="btn btn-outline-primary btn-page-header" href=http://jmlr.org/papers/v25/23-1612.html target=_blank rel=noopener>Proceedings</a></div></div><div class="article-header container featured-image-wrapper mt-4 mb-4" style=max-width:1200px;max-height:318px><div style=position:relative><img src=/publication/benchmarl/featured_hu61ea57729c9eea7664f9ac410fbef93f_1166554_1200x0_resize_q100_lanczos.jpg alt class=featured-image>
<a class="btn btn-outline-primary btn-page-header" href=http://jmlr.org/papers/v25/23-1612.html target=_blank rel=noopener>Proceedings</a>
<a class="btn btn-outline-primary btn-page-header" href=https://neurips.cc/virtual/2024/poster/98318 target=_blank rel=noopener>NeurIPS</a></div></div><div class="article-header container featured-image-wrapper mt-4 mb-4" style=max-width:1200px;max-height:318px><div style=position:relative><img src=/publication/benchmarl/featured_hu61ea57729c9eea7664f9ac410fbef93f_1166554_1200x0_resize_q100_lanczos.jpg alt class=featured-image>
<span class=article-header-caption>BenchMARL execution diagram</span></div></div><div class=article-container><h3>Abstract</h3><p class=pub-abstract>The field of Multi-Agent Reinforcement Learning (MARL) is currently facing a reproducibility crisis. While solutions for standardized reporting have been proposed to address the issue, we still lack a benchmarking tool that enables standardization and reproducibility, while leveraging cutting-edge Reinforcement Learning (RL) implementations. In this paper, we introduce BenchMARL, the first MARL training library created to enable standardized benchmarking across different algorithms, models, and environments. BenchMARL uses TorchRL as its backend, granting it high performance and maintained state-of-the-art implementations while addressing the broad community of MARL PyTorch users. Its design enables systematic configuration and reporting, thus allowing users to create and run complex benchmarks from simple one-line inputs.</p><div class=row><div class=col-md-1></div><div class=col-md-10><div class=row><div class="col-12 col-md-3 pub-row-heading">Type</div><div class="col-12 col-md-9"><a href=/publication/#2>Journal article</a></div></div></div><div class=col-md-1></div></div><div class="d-md-none space-below"></div><div class=row><div class=col-md-1></div><div class=col-md-10><div class=row><div class="col-12 col-md-3 pub-row-heading">Publication</div><div class="col-12 col-md-9">In <em>Journal of Machine Learning Research (JMLR)</em></div></div></div><div class=col-md-1></div></div><div class="d-md-none space-below"></div><div class=space-below></div><div class=article-style></div><div class=article-tags><a class="badge badge-light" href=/tag/multi-agent-reinforcement-learning/>Multi-Agent Reinforcement Learning</a>
<a class="badge badge-light" href=/tag/software-library/>Software library</a></div><div class="media author-card content-widget-hr"><a href=https://matteobettini.github.io/><img class="avatar mr-3 avatar-circle" src=/authors/admin/avatar_hu9ce315dca9917080fc2f5a7d235ccd99_954972_270x270_fill_q100_lanczos_center.jpeg alt="Matteo Bettini"></a><div class=media-body><h5 class=card-title><a href=https://matteobettini.github.io/>Matteo Bettini</a></h5><h6 class=card-subtitle>PhD Candidate</h6><p class=card-text>Matteo&rsquo;s research is focused on studying heterogeneity and resilience in multi-agent and multi-robot systems.</p><ul class=network-icon aria-hidden=true><li><a href=/#contact><i class="fas fa-envelope"></i></a></li><li><a href="https://scholar.google.com/citations?user=hcvR_W0AAAAJ" target=_blank rel=noopener><i class="ai ai-google-scholar"></i></a></li><li><a href=https://www.semanticscholar.org/author/Matteo-Bettini/2153781474 target=_blank rel=noopener><i class="ai ai-semantic-scholar"></i></a></li><li><a href=https://github.com/matteobettini target=_blank rel=noopener><i class="fab fa-github"></i></a></li><li><a href=https://linkedin.com/in/bettinimatteo target=_blank rel=noopener><i class="fab fa-linkedin"></i></a></li><li><a href=http://www.youtube.com/@matteobettini1871 target=_blank rel=noopener><i class="fab fa-youtube"></i></a></li><li><a href=/uploads/Matteo_bettini___CV.pdf><i class="ai ai-cv"></i></a></li></ul></div></div><div class="media author-card content-widget-hr"><a href=/authors/prorok/><img class="avatar mr-3 avatar-circle" src=/authors/prorok/avatar_hu40ffd73fd812012dfdef198593d0889d_343850_270x270_fill_q100_lanczos_center.jpeg alt="Amanda Prorok"></a><div class=media-body><h5 class=card-title><a href=/authors/prorok/>Amanda Prorok</a></h5><h6 class=card-subtitle>Professor</h6><p class=card-text>Amanda&rsquo;s research focuses on multi-agent and multi-robot systems. 
Our mission is to find new ways of coordinating artificially intelligent agents (e.g., robots, vehicles, machines) to achieve common goals in shared physical and virtual spaces.</p><ul class=network-icon aria-hidden=true><li><a href=https://twitter.com/aprorok target=_blank rel=noopener><i class="fab fa-twitter"></i></a></li><li><a href="https://scholar.google.ch/citations?user=o7xMDgEAAAAJ&hl=en" target=_blank rel=noopener><i class="ai ai-google-scholar"></i></a></li><li><a href=https://github.com/proroklab target=_blank rel=noopener><i class="fab fa-github"></i></a></li><li><a href=https://www.linkedin.com/in/aprorok/ target=_blank rel=noopener><i class="fab fa-linkedin"></i></a></li></ul></div></div><div class="article-widget content-widget-hr"><h3>Related</h3><ul><li><a href=/publication/torchrl-a-data-driven-decision-making-library-for-pytorch/>TorchRL: A data-driven decision-making library for PyTorch</a></li><li><a href=/publication/vmas-a-vectorized-multi-agent-simulator-for-collective-robot-learning/>VMAS: A Vectorized Multi-Agent Simulator for Collective Robot Learning</a></li><li><a href=/publication/controlling-behavioral-diversity-in-multi-agent-reinforcement-learning/>Controlling Behavioral Diversity in Multi-Agent Reinforcement Learning</a></li><li><a href=/publication/heterogeneous-multi-robot-reinforcement-learning/>Heterogeneous Multi-Robot Reinforcement Learning</a></li><li><a href=/publication/heterogeneous-teams/>Heterogeneous Teams</a></li></ul></div></div></div></div><div class=page-footer><div class=container><footer class=site-footer><p class=powered-by>View <a href=https://github.com/matteobettini/professional_website target=_blank rel=noopener>source</a>.</p></footer></div></div><div id=modal class="modal fade" role=dialog><div class=modal-dialog><div class=modal-content><div class=modal-header><h5 class=modal-title>Cite</h5><button type=button class=close data-dismiss=modal aria-label=Close>
<span aria-hidden=true>&#215;</span></button></div><div class=modal-body><pre><code class="tex hljs"></code></pre></div><div class=modal-footer><a class="btn btn-outline-primary my-1 js-copy-cite" href=# target=_blank><i class="fas fa-copy"></i> Copy</a>
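Aside on the BenchMARL abstract in the context lines above: it states that complex benchmarks run from simple one-line inputs. A sketch of what that looks like through the library's Python API, assuming the entry points documented in the BenchMARL README; the algorithm, task, and model choices here are examples, not part of this commit:

    from benchmarl.algorithms import MappoConfig
    from benchmarl.environments import VmasTask
    from benchmarl.experiment import Experiment, ExperimentConfig
    from benchmarl.models.mlp import MlpConfig

    # Build one experiment from the library's yaml defaults and run it;
    # swapping configs is how different algorithm/task/model points of a
    # benchmark are specified.
    experiment = Experiment(
        task=VmasTask.BALANCE.get_from_yaml(),
        algorithm_config=MappoConfig.get_from_yaml(),
        model_config=MlpConfig.get_from_yaml(),
        critic_model_config=MlpConfig.get_from_yaml(),
        seed=0,
        config=ExperimentConfig.get_from_yaml(),
    )
    experiment.run()

The README's equivalent one-line CLI form, if memory of the docs serves, is "python benchmarl/run.py algorithm=mappo task=vmas/balance", with hydra multirun sweeps over comma-separated algorithm and task lists expanding into full benchmarks.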