GOSIM Workshop Schedule

Schedule

Day 01

Open LLM visions, projects and infra

8:00 am

Registration

9:00 am

GOSIM Workshop Kickoff

Tao Jiang
Michael Yuan

9:30 am

GOSIM Keynotes - Building the Backbone: AI Infrastructure in an Open-Source Era

In today's dynamic digital age, AI infrastructure has emerged as the backbone supporting the rapid advancements in artificial intelligence. However, as we stand on the cusp of a new AI-driven era, it is the power of open-source initiatives that promises to sustain and amplify these advancements. This presentation seeks to shed light on how open-source AI infrastructure is far beyond mere toolsets and platforms—it embodies the spirit of collective progress, community-driven innovation, and scalability. The talk will walk you through real-world examples of how open-source AI infrastructure projects are shaping the very foundation of AI's future. This journey will encompass the nitty-gritty of design considerations, scalability challenges, and harmonizing diverse hardware and software environments. Moreover, a deep dive into the ethical implications of AI infrastructure design and how inclusivity in the open-source community can influence ethical AI will be explored. The emphasis on creating tools that democratize AI, ensuring accessibility, and fostering a rich ecosystem will be highlighted. As we cast our vision forward, the session will also touch upon the symbiotic relationship between Large Language Models (LLMs), General Artificial Intelligence (GAI), and the critical role of open-source infrastructure in their evolution. Come, let's architect the AI-driven future in harmony, through the pillars of open-source infrastructure.
David Bian
Bruce Zhang

10:15 am

GOSIM Keynotes - Evolving Rust to the Next Ten Million Developers

Developers who already use Rust have widely expressed satisfaction with it. However, Rust remains a challenging language for new developers to learn and become fluent in. As we continue to evolve Rust over the next decade, what changes should we make to help developers adopt Rust? How can we help the Rust software ecosystem scale with those developers, and what challenges do we foresee? How do we avoid getting stuck in local maxima of language design? In this talk, I'll start out by presenting known directions from the Rust language roadmap, and then continue on into some detailed speculative explorations the Rust language could go in. I'll discuss some lessons we can learn from other languages and language design experiments, and some problems we are still exploring potential solutions for. (This talk should not be taken as the concrete direction for Rust, only as an example of the degree of innovation we may need in order to reach the future.)
Josh Triplett

11:00 am

Morning Break

11:15 am

Hugging Face BigCode Project

BigCode is an open scientific collaboration focused on the development of large language models for code. By developing strong, open-access models, BigCode aims to narrow the gap with closed-source models and enhance data governance practices in AI and software engineering. Our work reflects a concerted effort in collaboration with the open-source community to strengthen transparency, responsibility, and development within the field. For instance, we released the largest collection of permissive GitHub repositories in The Stack dataset and StarCoder, a strong multilingual code model.
Loubna Ben Allal

12:15 pm

Lunch Break

1:45 pm

Human Feedback Learning: Applications And Challenges

Reinforcement Learning with human feedback (RLHF) is a key technology widely adopted in training the most advanced Large Language Models (LLMs). RLHF enables large language models (LLMs) to learn and understand human objectives from human preference behaviors, thereby better adapting and handling complex issues such as human values and emotions that are challenging to model. The Fengshenbang team conducted in-depth studies on the core stages of RLHF, namely reward modeling and policy optimization. Unique experiences and methods have been developed to address the instability of reinforcement learning training, such as the large-scale nature of exploration spaces, the sparsity of rewards and the imperfection of the reward model. Despite this, many open issues in RLHF still need to be resolved, which may require more sophisticated algorithm mechanisms.
Hao Wang

2:15 pm

How to Train Expert Large Language Models for Vertical Abilities

Vertical capabilities in large language models are vital for deploying impactful industry models. The Fengshenbang team is dedicated to exploring training and fine-tuning methods for various vertical skills. We will discuss techniques for tasks like coding, writing, and retrieval augmentation, highlighting the existing challenges.
Kunhao Pan

2:45 pm

A New Generation of General Distributed Computing Infrastructure Supporting Artificial Intelligence Applications

Due to its lightweight, efficient, and secure characteristics, WASM (WebAssembly) is rapidly expanding its range of applications. Especially in the direction of artificial intelligence, the new WASI-NN standard provides AI inference capabilities for WASM applications. In practice, we found that there is extensive complementary synergy between the Ray architectural ecosystem and the WASM ecosystem. Ray is a high-performance distributed computing framework that has been widely used for machine learning training and tuning. By fully leveraging the distributed computing capabilities of the Ray framework, we can effectively address the resource constraints and parallel execution limitations of the WASM runtime environment. We developed Warrior, a WASM distributed scheduling and execution system based on Ray. This system allows developers to create monolithic applications with runtime dynamic distributed execution and offers dynamic monitoring of user code at runtime to achieve high performance, high availability, and fine-grained distributed execution for cloud applications. Additionally, ML businesses can also utilize the Warrior platform for distributed AI inference and learning, providing a lightweight, efficient, and secure architecture platform for domains like LLM
Wilson Wang

3:45 pm

Afternoon Break

4:00 pm

IBM watsonx - Open Source Based GenAI Platform Enables Enterprises to Put AI in Action

IBM's "Watsonx" represents an enterprise-level generative AI platform based on open-source technologies, aiming to empower businesses in their intelligent restructuring. IBM's journey with open-source reveals both the opportunities and risks that companies encounter when integrating such technologies. This is seen in the broader context of the evolution of AI and its main technologies. Introducing AI into businesses presents opportunities, challenges, and pivotal questions. IBM's exploration and contributions in enterprise-level AI shed light on this area. Specifically, Watsonx's components and capabilities demonstrate its strength, while its open-source ecosystem underlines IBM's commitment to community collaboration. Additionally, practical applications and case studies of generative AI underscore its relevance and transformative potential in various scenarios.
Hai-Xu Cheng

4:30 pm

LLMs App Stack Practice and Outlook

We'll delve into the LLMs App Stack, exploring components like the RAG Pipeline, Orchestration, Agents, and the seamless integration of Hybrid solutions. We'll also cast an eye to the future, discussing the evolving outlook for the Agents Framework. To give you a clear picture, I'll present the LLMs Architecture Diagram, shedding light on its design and functionalities. And for our developer community, we'll touch upon Dify's unique value proposition, highlighting how it empowers you in this dynamic tech landscape.
Junchen Yan

5:00 pm

Gorilla: Large Language Model Connected with Massive APIs

Large Language Models (LLMs) have seen an impressive wave of advances recently, with models now excelling in a variety of tasks, such as mathematical reasoning and program synthesis. However, their potential to effectively use tools via API calls remains unfulfilled. This is a challenging task even for today's state-of-the-art LLMs such as GPT-4, largely due to their inability to generate accurate input arguments and their tendency to hallucinate the wrong usage of an API call. We release Gorilla, a finetuned LLaMA-based model that surpasses the performance of GPT-4 on writing API calls. When combined with a document retriever, Gorilla demonstrates a strong capability to adapt to test-time document changes, enabling flexible API updates and version changes. Gorilla also substantially mitigates the hallucination issues commonly encountered when prompting LLMs directly. Gorilla is an open-source project that has served hundreds of thousands of user requests and has an energetic community supporting it.
Shishir Patil
Tianjun Zhang
Joseph Gonzalez

6:00 pm

Happy Hour

Schedule

Day 02

Open AI visions, projects and infra

8:00 am

Registration

9:00 am

Efficient Techniques for Super-Large AI Models Training and Deployment

The Transformer architecture has improved the performance of deep learning models in domains such as Computer Vision and Natural Language Processing. Together with better performance come larger model sizes. This imposes challenges to the memory wall of the current accelerator hardware such as GPU. It is never ideal to train large models such as Vision Transformer, BERT, LLaMA, and GPT on a single GPU or a single machine. There is an urgent demand to train models in a distributed environment. However, distributed training, especially model parallelism, often requires domain expertise in computer systems and architecture. It remains a challenge for AI researchers to implement complex distributed training solutions for their models. To solve this problem, we introduce a unified parallel training system designed to seamlessly integrate different paradigms of parallelization techniques including data parallelism, pipeline parallelism, multiple tensor parallelism, and sequence parallelism. Our system aims to support the AI community to write distributed models in the same way as how they write models normally. This allows them to focus on developing the model architecture and separates the concerns of distributed training from the development process. Our system is able to achieve 2x speedup over state-of-the-art distributed systems for GPT model training.
Tong Li

10:00 am

Morning Break

10:15 am

AIGC on the Mobile: Towards Ultimate Efficiency in Deep Learning Acceleration

Mobile and embedded computing devices have become key carriers of deep learning to facilitate the widespread use of machine intelligence. However, there is a widely recognized challenge to achieve real-time DNN inference on edge devices, due to the limited computation and storage resources on such devices. Model compression of DNNs, including weight pruning and weight quantization, has been investigated to overcome this challenge. However, current work on DNN compression suffers from the limitation that accuracy and hardware performance are somewhat conflicting goals that are difficult to satisfy simultaneously. We present our recent work, Compression-Compilation Codesign, to overcome this limitation towards the best possible DNN acceleration on edge devices. The neural network model is optimized hand in hand with compiler-level code generation, achieving the best possible hardware performance while maintaining zero accuracy loss, which is beyond the capability of prior work. We are able to achieve real-time on-device execution of a number of DNN tasks, including object detection, pose estimation, activity detection, and speech recognition, using just an off-the-shelf mobile device, with up to 180× speedup compared with prior work. Recently, for the first time, we enabled large-scale language and AIGC models such as GPT and Stable Diffusion on mobile devices. Lastly, we will introduce our recent breakthrough in superconducting-logic-based neural network acceleration that achieves a 10^6× energy-efficiency gain compared with state-of-the-art solutions, achieving the quantum limit in computing.
Yanzhi Wang

11:15 am

Cloud Native CPU and Heterogeneous Computing Enable AI 2.0

ChatGPT has sparked a new wave of AI, and large models are moving from the technology-breakthrough stage to the industry-enablement stage. With low-cost, readily available pre-trained large models, a large number of small and medium-sized data centers can fine-tune models and deploy applications for specific industry scenarios. The "tight coupling" of high-performance CPUs and xPUs is key to achieving high performance in AI-optimized data centers. The Ysemi CPU is based on the Arm v9 architecture, which supports enhanced AI features such as SVE2, BF16 MM, and INT8 MM. Based on a Kubernetes scheduling and task-management platform, Ysemi optimizes the open-source software stack for mainstream AI frameworks, including TensorFlow, PyTorch, oneDNN, and OpenBLAS, on the basis of the Arm ACL library. With support for heterogeneous computing configurations, CPU+xPU provides a cost-effective solution for AI training and inference.
Lingyun Tan

12:15 pm

Lunch Break

1:45 pm

Deep Dive into Fine-Tuning of Large Language Models

In this workshop, we will delve into how to fine-tune large models. First, we will provide a brief introduction to the concept of large models, including well-known models like GPT-4 and LLaMA, and explain why there's a need to fine-tune. Subsequently, we will introduce the basic concepts and terminologies of large models, explain the differences between Pre-training and Fine-tuning, and the concept of Transfer Learning. We will discuss in detail the steps and processes of fine-tuning, from data preparation and choosing a pre-trained model to the actual fine-tuning process. Next, we will demonstrate the actual operation of fine-tuning through a real-life case study and analyze the results and outcomes. Lastly, we will discuss some common challenges, such as overfitting, data imbalance, model selection, and resource constraints.
Bosheng Ding

2:45 pm

ChatLaw: The Practice of Large Language Models in the Legal Field

The rise of LLMs has brought new opportunities for the digital transformation of the legal industry. We will delve deep into how large language model technology can be applied in the legal field to address real-world user needs. First, we will provide a brief overview of the legal industry's scenarios and requirements and analyze how LLMs can create value for users. Then, we will explain the mechanisms and design philosophy behind the product, including how it is trained and optimized using a vast amount of legal textual data, and how it conducts deep reasoning over complex judicial logic. Finally, through a real product example, we will showcase and discuss the productized application of LLMs in niche domains.
Bohua Chen

3:45 pm

Afternoon Break

4:00 pm

Is Rust + Wasm the Language of AGI?

Vivian Hu

5:00 pm

Un-Conference Q&A Roundtable

Un-Conference Q&A Roundtable

Schedule

Day 01

Next generation programming language

8:00 am

Registration

9:00 am

GOSIM Workshop Kickoff

Tao Jiang
Michael Yuan

9:30 am

GOSIM Keynotes - Building the Backbone: AI Infrastructure in an Open-Source Era

In today's dynamic digital age, AI infrastructure has emerged as the backbone supporting the rapid advancements in artificial intelligence. However, as we stand on the cusp of a new AI-driven era, it is the power of open-source initiatives that promises to sustain and amplify these advancements. This presentation seeks to shed light on how open-source AI infrastructure is far beyond mere toolsets and platforms—it embodies the spirit of collective progress, community-driven innovation, and scalability. The talk will walk you through real-world examples of how open-source AI infrastructure projects are shaping the very foundation of AI's future. This journey will encompass the nitty-gritty of design considerations, scalability challenges, and harmonizing diverse hardware and software environments. Moreover, a deep dive into the ethical implications of AI infrastructure design and how inclusivity in the open-source community can influence ethical AI will be explored. The emphasis on creating tools that democratize AI, ensuring accessibility, and fostering a rich ecosystem will be highlighted. As we cast our vision forward, the session will also touch upon the symbiotic relationship between Large Language Models (LLMs), General Artificial Intelligence (GAI), and the critical role of open-source infrastructure in their evolution. Come, let's architect the AI-driven future in harmony, through the pillars of open-source infrastructure.
David Bian
Bruce Zhang

10:15 am

GOSIM Keynotes - Evolving Rust to the Next Ten Million Developers

Developers who already use Rust have widely expressed satisfaction with it. However, Rust remains a challenging language for new developers to learn and become fluent in. As we continue to evolve Rust over the next decade, what changes should we make to help developers adopt Rust? How can we help the Rust software ecosystem scale with those developers, and what challenges do we foresee? How do we avoid getting stuck in local maxima of language design? In this talk, I'll start out by presenting known directions from the Rust language roadmap, and then continue on into some detailed speculative explorations the Rust language could go in. I'll discuss some lessons we can learn from other languages and language design experiments, and some problems we are still exploring potential solutions for. (This talk should not be taken as the concrete direction for Rust, only as an example of the degree of innovation we may need in order to reach the future.)
Josh Triplett

11:00 am

Morning Break

11:15 am

Mobile Rust in Open Harmony

Huawei advocates and promotes openness, concurrency, and security in mobile operating systems. Taking into account the limited resources of mobile devices, we provide the industry with a better Rust-based parallel concurrency solution.
Yan Zhou

11:30 am

Robius: A Vision for Multi-Platform App Development in Rust

Developers frequently ask how they can create multi-platform applications in Rust, hoping to leverage Rust’s safety, performance, and robust ecosystem for a painless app dev experience. However, while Rust excels in systems programming environments, its app dev ecosystem is fragmented and not yet well-established or mature, particularly on mobile platforms. This talk will highlight our vision to establish an open-source community and drive collaboration among major contributors in the space of cross-platform Rust app frameworks. Specifically, we will dive into the motivation and goals of this vision, its technical requirements, and our implementation plans for a full-system reference design. Ultimately, we aim to realize a turnkey solution that enables developers of all experience levels to easily create robust, modern Rust apps on any platform.
Kevin Boos

12:15 pm

Lunch Break

1:45 pm

Faster Rust code: lessons from seven years of speeding up the Rust compiler

This talk summarizes what I have learned about high performance Rust code from my work on the Rust compiler. It includes many tips and tricks on making Rust code faster, details on my profiling process, and information about how to track performance so that you can steadily improve a program's performance.
Nicholas Nethercote
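
As a concrete taste of the kind of tip such a talk covers (this sketch is not taken from the talk itself, and the function names are hypothetical), the example below contrasts allocating a fresh String on every iteration with reusing one pre-allocated buffer, a pattern that profilers typically show as far fewer heap allocations.

```rust
// Hypothetical example: formatting many records into text.
// The "naive" version allocates a new String per item; the "reused"
// version writes into one buffer, avoiding repeated allocations.
use std::fmt::Write;

fn render_all_naive(values: &[u64]) -> Vec<String> {
    values.iter().map(|v| format!("value={v}")).collect()
}

fn render_all_reused(values: &[u64], out: &mut String) {
    out.clear();
    out.reserve(values.len() * 12); // rough size hint, avoids regrowth
    for v in values {
        // `write!` into an existing String only allocates if it must grow.
        let _ = writeln!(out, "value={v}");
    }
}

fn main() {
    let values: Vec<u64> = (0..1_000).collect();
    let naive = render_all_naive(&values);
    let mut buf = String::new();
    render_all_reused(&values, &mut buf);
    assert_eq!(naive.len(), 1_000);
    assert_eq!(buf.lines().count(), 1_000);
}
```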

2:45 pm

Contributing to the Rust Compiler from Start to…

So, you want to contribute to the Rust compiler? Maybe you want to fix a bug, or improve some error message. Getting started can be tough: there are plenty of resources and information available to help learn about contributing to the Rust compiler, but sifting through that can be challenging. In this talk, I’ll walk through making a contribution to the Rust compiler. I’ll go through the basic steps of getting your development set up, including cloning the compiler and setting up an IDE. I’ll walk through the steps to take for identifying a candidate issue to solve, work through identifying the underlying cause and deciding how to solve it, and adding tests to ensure it doesn’t regress. Finally, I’ll demonstrate a typical pull request review process.
Jack Huey
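
To make the "adding tests" step concrete, here is a hedged sketch of what a rustc UI test can look like: a small Rust file under tests/ui whose expected diagnostic is pinned with a `//~ ERROR` annotation and a checked-in .stderr file. The path and error site below are illustrative, not from a real issue.

```rust
// tests/ui/mismatched-types/illustrative-example.rs (hypothetical path)
//
// rustc's UI test suite compiles files like this and compares the emitted
// diagnostics against the inline annotations and a .stderr snapshot.

fn takes_u32(_x: u32) {}

fn main() {
    let s = String::from("hello");
    takes_u32(s); //~ ERROR mismatched types
}
```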

3:45 pm

Afternoon Break

4:00 pm

Macros 2.0: Status and Issues

"Macros 2.0" is a new macro system that exists in a half-implemented state in rustc for years, but is not yet a part of the language officially. In the talk we'll try to list design and implementation issues that prevent the new macro system from entering the language, and their possible solutions.
Vadim Petrochenkov
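
For readers who have not seen the feature, a minimal sketch of a "Macros 2.0" declarative macro is shown below. It assumes a nightly toolchain with the unstable `decl_macro` feature enabled; the macro itself is a toy example, not from the talk.

```rust
// A minimal "Macros 2.0" declarative macro (nightly-only, unstable).
#![feature(decl_macro)]

// Unlike `macro_rules!`, `macro` items follow ordinary item visibility and
// path-based scoping rules, so no `#[macro_export]` is needed.
pub macro square($x:expr) {
    $x * $x
}

fn main() {
    let n = square!(7);
    assert_eq!(n, 49);
    println!("7 squared is {n}");
}
```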

5:00 pm

The Path to a Stable ABI for Rust

The Rust programming language is well known for its API stability guarantees: code written for Rust 1.0 in 2015 still compiles with the latest compilers. However Rust has never had a stable ABI, which would enable Rust programs to use Rust libraries compiled with a different compiler version. The availability of a stable ABI is essential to allow Rust programs and libraries to be distributed in compiled form. Rust was not designed with a stable ABI as a primary goal, which is why this is still an unsolved problem almost a decade after the release of Rust 1.0. This talk will dive into the reasons why Rust’s design makes a stable ABI tricky, and explore how these problems can be addressed.
Amanieu d'Antras
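
For context, the sketch below shows the workaround used today when a Rust library must expose a binary-stable interface: the layout and calling convention are pinned down explicitly with `#[repr(C)]` and `extern "C"`, at the cost of being restricted to C-compatible types. This is a generic illustration, not a proposal from the talk.

```rust
// The layout of a default Rust struct and the calling convention of a plain
// Rust `fn` may change between compiler versions, so they cannot be relied on
// across a dynamic-library boundary. `#[repr(C)]` and `extern "C"` pin both.

#[repr(C)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

// `#[no_mangle]` keeps the exported symbol name predictable for the loader;
// `extern "C"` fixes the calling convention.
#[no_mangle]
pub extern "C" fn point_length(p: Point) -> f64 {
    (p.x * p.x + p.y * p.y).sqrt()
}

fn main() {
    let p = Point { x: 3.0, y: 4.0 };
    println!("length = {}", point_length(p));
}
```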

6:00 pm

Happy Hour

Schedule

Day 02

Next generation programming language

8:00 am

Registration

9:00 am

Rust Learning Resources

Bart Massey

10:00 am

Morning Break

10:15 am

Teaching and Learning in the Rust OS Open-Source Operating System Training Camp

Starting from 2020, Tsinghua University began experimenting with having students use the Rust language to write operating systems and launched the Rust OS open-source training camp teaching for universities nationwide. Over the past 4 years, they have developed a series of courses, including Rust language programming, RISC-V architecture, major OS experiments, ArceOS componentized OS, and Hypervisor virtualization topics. These efforts have accumulated experience in training talents on a large scale in China to write operating systems using Rust.
Ming Li

11:15 am

Oxidizing Education

For Rust to really take off, we have a cycle to break: companies need Rust developers to start doing Rust, and developers need Rust jobs to start learning Rust. To break the cycle, we can oxidize education: enable universities to start teaching Rust. This talk is about how we can do just that.
Henk Oordt

12:15 pm

Lunch Break

1:45 pm

Fighting The Heap War with Rust

Ownership is a critical mechanism that differentiates Rust from other zero-cost systems programming languages with little automated heap management. However, the current design of Rust is not yet perfect and still suffers from vulnerabilities, especially those incurred by unsafe code. In this talk, I will present our recent research results towards mitigating the residual bugs of Rust programs related to heap management, including dangling pointer dereference, memory leakage, and memory exhaustion handling.
Hui Xu
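
To illustrate one of the bug classes mentioned (dangling pointer dereference), the sketch below is an intentionally buggy, hand-written example of how unsafe code can sidestep the borrow checker; it is not taken from the speaker's work.

```rust
// Intentionally buggy sketch: a dangling-pointer dereference hidden behind
// `unsafe`, which the borrow checker cannot catch. Tools such as Miri flag
// this as a use of a dangling pointer.
fn main() {
    let mut v = vec![1u32, 2, 3];
    let p = v.as_ptr(); // raw pointer into the Vec's current heap buffer

    // Growing the Vec may reallocate and free the old buffer,
    // leaving `p` dangling.
    for i in 0..1024u32 {
        v.push(i);
    }

    // Undefined behavior: reading through a pointer into freed memory.
    let first = unsafe { *p };
    println!("{first}");
}
```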

2:45 pm

Fuzzing Rust Library Interactions via Ecosystem-Guided Target Generation

Rust is popular in software development for its memory safety and maturing ecosystems. Quality enhancement of Rust libraries, core components in software, is paramount. Existing methods, however, struggle with testing Rust API interactions due to Rust's unique ownership constraints and challenges in navigating vast API function dependencies. To address these, we introduce a fuzzing technique tool, which efficiently generates intricate API interactions, aiming to improve Rust library quality. This technique employs a weighted API dependency graph that captures function relationships and common usage patterns, narrowing the search space and emphasizing prevalent application scenarios. Additionally, an ownership assurance algorithm ensures the generated Rust programs are valid, enhancing the success rate of compiling fuzz targets. The approach is resource-efficient in producing superior fuzz targets and effectively finds impactful errors in real development. So far the tool has identified 130 bugs, including 84 previously unknown bugs, in 20 well-known latest versions of Rust libraries, of which 54 have been confirmed.
Yang Feng
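
For orientation, a plain hand-written fuzz target in the cargo-fuzz style is sketched below; the tool described in the talk generates richer targets that chain several API calls, whereas this baseline exercises a single (hypothetical) parsing function.

```rust
// fuzz/fuzz_targets/parse_header.rs - a generic cargo-fuzz harness.
#![no_main]

use libfuzzer_sys::fuzz_target;

// Hypothetical library function under test.
fn parse_header(data: &[u8]) -> Option<(u8, u16)> {
    if data.len() < 3 {
        return None;
    }
    let version = data[0];
    let length = u16::from_le_bytes([data[1], data[2]]);
    Some((version, length))
}

fuzz_target!(|data: &[u8]| {
    // The harness must not panic for any input; the fuzzer reports
    // panics and crashes as findings.
    let _ = parse_header(data);
});
```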

3:45 pm

Afternoon Break

4:00 pm

From Git Index to Sparse Index

In this talk, I will cover how Cargo publishes and downloads dependencies on crates.io. I will also dive into the reasons for introducing Sparse indexing.
Rustin Liu

4:30 pm

Scaling Git To Infinite Monorepo By Refactoring Internal Objects In Rust

Git's internal filesystem implementation limits performance and scalability for massive repositories. Refactoring Git's internal objects in Rust and adapting them to a database makes it possible to use distributed databases to store millions of files, integrate Large File Support, and keep binary files in distributed storage. The database-backed design enables integration with indexing and search services to provide an efficient API for searching massive codebases.
Quanyi Ma

5:00 pm

Un-Conference Q&A Roundtable

Un-Conference Q&A Roundtable

Schedule

Day 01

High performance, cross platform, app and web development

8:00 am

Registration

9:00 am

GOSIM Workshop Kickoff

Tao Jiang
Michael Yuan

9:30 am

GOSIM Keynotes - Building the Backbone: AI Infrastructure in an Open-Source Era

In today's dynamic digital age, AI infrastructure has emerged as the backbone supporting the rapid advancements in artificial intelligence. However, as we stand on the cusp of a new AI-driven era, it is the power of open-source initiatives that promises to sustain and amplify these advancements. This presentation seeks to shed light on how open-source AI infrastructure is far beyond mere toolsets and platforms—it embodies the spirit of collective progress, community-driven innovation, and scalability. The talk will walk you through real-world examples of how open-source AI infrastructure projects are shaping the very foundation of AI's future. This journey will encompass the nitty-gritty of design considerations, scalability challenges, and harmonizing diverse hardware and software environments. Moreover, a deep dive into the ethical implications of AI infrastructure design and how inclusivity in the open-source community can influence ethical AI will be explored. The emphasis on creating tools that democratize AI, ensuring accessibility, and fostering a rich ecosystem will be highlighted. As we cast our vision forward, the session will also touch upon the symbiotic relationship between Large Language Models (LLMs), General Artificial Intelligence (GAI), and the critical role of open-source infrastructure in their evolution. Come, let's architect the AI-driven future in harmony, through the pillars of open-source infrastructure.
David Bian
Bruce Zhang

10:15 am

GOSIM Keynotes - Evolving Rust to the Next Ten Million Developers

Developers who already use Rust have widely expressed satisfaction with it. However, Rust remains a challenging language for new developers to learn and become fluent in. As we continue to evolve Rust over the next decade, what changes should we make to help developers adopt Rust? How can we help the Rust software ecosystem scale with those developers, and what challenges do we foresee? How do we avoid getting stuck in local maxima of language design? In this talk, I'll start out by presenting known directions from the Rust language roadmap, and then continue on into some detailed speculative explorations the Rust language could go in. I'll discuss some lessons we can learn from other languages and language design experiments, and some problems we are still exploring potential solutions for. (This talk should not be taken as the concrete direction for Rust, only as an example of the degree of innovation we may need in order to reach the future.)
Josh Triplett

11:00 am

Morning Break

11:15 am

Mobile Rust in Open Harmony

Huawei advocates and promotes openness, concurrency, and security in mobile operating systems. Taking into account the limited resources of mobile devices, we provide the industry with a better Rust-based parallel concurrency solution.
Yan Zhou

11:30 am

Robius: A Vision for Multi-Platform App Development in Rust

Developers frequently ask how they can create multi-platform applications in Rust, hoping to leverage Rust’s safety, performance, and robust ecosystem for a painless app dev experience. However, while Rust excels in systems programming environments, its app dev ecosystem is fragmented and not yet well-established or mature, particularly on mobile platforms. This talk will highlight our vision to establish an open-source community and drive collaboration among major contributors in the space of cross-platform Rust app frameworks. Specifically, we will dive into the motivation and goals of this vision, its technical requirements, and our implementation plans for a full-system reference design. Ultimately, we aim to realize a turnkey solution that enables developers of all experience levels to easily create robust, modern Rust apps on any platform.
Kevin Boos

12:15 pm

Lunch Break

1:45 pm

Live App Building Using Makepad: Hands-on Coding

A hands-on coding experience where participants will familiarize themselves with Makepad Studio. This includes an introduction to the Code Editor, Visual Designer, and DSL. The session will showcase features like Live Reloading, Cross-platform Building, and various example applications. As we move deeper, attendees will engage in live coding, witnessing the creation of a new app from scratch and its live building & deployment process.
Rik Arends

2:45 pm

Makepad In-Depth: Makepad Architecture and Design Internals

Delving into 'Makepad In-Depth', we will explore the architecture and design internals of Makepad. This will encompass a comprehensive walkthrough of Makepad Widgets, focusing on their layout and the dynamic interplay of states & animation. With 'Makepad Draw', attendees will experience drawing with Cx2d and an app walkthrough. The segment concludes with an immersion into the Makepad Platform, touching upon its Live System, the magic of Shaders, and the nuances of Rendering.
Rik Arends

3:45 pm

Afternoon Break

Afternoon Break

4:00 pm

Makepad Performance Benchmarking

How do apps created using the Makepad Framework perform against similar apps written native for Android? In this workshop, we present the results of performance benchmarking data for these apps written using Makepad, as compared to their Android native counterparts. We will discuss how we measured their performances and what are the strengths of the Makepad platform.
Edward Tan

5:00 pm

How to Build Your own Makepad Widget from Scratch

Delve into the vast capabilities of Makepad by constructing an interactive "image carousel" widget from scratch. Start your journey by understanding the steps to register a new widget, paired with the essence of initializing it with minimal Rust code. As you progress, learn to leverage the power of existing widgets, and get hands-on experience in manipulating UI components directly through Rust. The adventure doesn't stop there; you'll also seamlessly animate your carousel images and master the art of navigation using intuitive next/prev buttons. By the end, you'll be adept at triggering tailored actions, ensuring a holistic app engagement experience.
Jorge Bejar

6:00 pm

Happy Hour

Happy Hour

Schedule

Day 02

High performance, cross platform, app and web development

8:00 am

Registration

9:00 am

Write Once, Run Everywhere: Building Apps with Dioxus

This is a live coding session. To get started, ensure Rust, the wasm and mobile toolchains are installed, and have the dioxus-cli set up. You should be familiar with HTML, which serves as the foundation for UI design, CSS for styling and layout, and React for encapsulation and state management. Our architectural design allows for cross-platform renderers. Key components include essential crates such as routers, loggers, state management, and CLI. We also offer exciting features like templates, hot-reloading, fullstack capabilities, and optimizations. In our live coding session, you'll witness the creation of a basic app, learn how to incorporate backend functionalities, deploy the app online, bundle it for desktop use, and simulate its mobile operation. As we move forward, we have a roadmap detailing our future plans, and we encourage contributions to enhance the libraries in our ecosystem.
Jonathan Kelley
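
For attendees who have not used Dioxus, a minimal desktop counter in the style of its RSX macro is sketched below. Exact names vary between Dioxus versions; this sketch assumes the 0.4-era API (`dioxus_desktop::launch`, `use_state`), so treat the identifiers as assumptions rather than the definitive interface.

```rust
// Minimal Dioxus-style desktop counter (0.4-era API; names may differ
// in other versions of the framework).
use dioxus::prelude::*;

fn main() {
    dioxus_desktop::launch(app);
}

fn app(cx: Scope) -> Element {
    // Component state; the UI re-renders whenever it changes.
    let mut count = use_state(cx, || 0);

    cx.render(rsx! {
        h1 { "Counter: {count}" }
        button { onclick: move |_| count += 1, "Up" }
        button { onclick: move |_| count -= 1, "Down" }
    })
}
```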

10:00 am

Morning Break

10:15 am

Ylong Project

Explore Rust's asynchronous mechanisms and third-party community concurrency frameworks. Delve into the mobile domain's demands for asynchronous frameworks and the shortcomings of existing frameworks on mobile. Dive deep into the ylong Hongmeng concurrency framework, with an emphasis on priority scheduling, structured concurrency, and the integration of IO & CPU tasks.
Mingyu Chen

11:15 am

Immersive Application Experiences

An immersive user experience demands a lot from applications beyond just UI. Integrating with native system applications to manage contacts, or files, reacting to sensor data and presenting information suitable for the target device, or adapting to the native notification system. In this session we will identify the distinguishing properties of prominent applications, look at how their frameworks provide access to system APIs beyond the UI, and how the Rust ecosystem can learn from them. Eventually we want to discuss how the Osiris Project tries to provide a framework to expose system APIs to Rust application developers, what we can learn from the web, and how you can contribute to this effort.
David Rheinsberg

12:15 pm

Lunch Break

1:45 pm

You Can Work on the Web Platform!

Have you ever wanted to work on a web browser? Servo is an experimental web engine written in Rust. Its small code base and friendly community mean that it is an ideal project for those looking to dip their toes into the world of web browser engineering. This talk will cover the basics of building and running Servo on your own computer. In addition, we'll take a tour of Servo's main subsystems and see what kind of work goes into building them. Additionally, we'll cover a variety of types of contributions to Servo, adapted to different kinds of experience and specialization. By the end you should have the tools you need to explore contributing yourself.
Martin Robinson

3:45 pm

Afternoon Break

4:00 pm

Taffy: Bringing a Dead Dependency Back to Life Through Open Source

Bevy had a problem: the UI layout library we use had been abandoned for years after the company that made it was bought out, and it was full of critical bugs. This talk explains how we forked, fixed and sustained a complex library thanks to open source.
Alice Cecile

5:00 pm

Un-Conference Q&A Roundtable

Un-Conference Q&A Roundtable

Schedule

Day 01

Software defined vehicle, autonomous driving and robotics

8:00 am

Registration

9:00 am

GOSIM Workshop Kickoff

Tao Jiang
Michael Yuan

9:30 am

GOSIM Keynotes - Building the Backbone: AI Infrastructure in an Open-Source Era

In today's dynamic digital age, AI infrastructure has emerged as the backbone supporting the rapid advancements in artificial intelligence. However, as we stand on the cusp of a new AI-driven era, it is the power of open-source initiatives that promises to sustain and amplify these advancements. This presentation seeks to shed light on how open-source AI infrastructure is far beyond mere toolsets and platforms—it embodies the spirit of collective progress, community-driven innovation, and scalability. The talk will walk you through real-world examples of how open-source AI infrastructure projects are shaping the very foundation of AI's future. This journey will encompass the nitty-gritty of design considerations, scalability challenges, and harmonizing diverse hardware and software environments. Moreover, a deep dive into the ethical implications of AI infrastructure design and how inclusivity in the open-source community can influence ethical AI will be explored. The emphasis on creating tools that democratize AI, ensuring accessibility, and fostering a rich ecosystem will be highlighted. As we cast our vision forward, the session will also touch upon the symbiotic relationship between Large Language Models (LLMs), General Artificial Intelligence (GAI), and the critical role of open-source infrastructure in their evolution. Come, let's architect the AI-driven future in harmony, through the pillars of open-source infrastructure.
David Bian
Bruce Zhang

10:15 am

GOSIM Keynotes - Evolving Rust to the Next Ten Million Developers

Developers who already use Rust have widely expressed satisfaction with it. However, Rust remains a challenging language for new developers to learn and become fluent in. As we continue to evolve Rust over the next decade, what changes should we make to help developers adopt Rust? How can we help the Rust software ecosystem scale with those developers, and what challenges do we foresee? How do we avoid getting stuck in local maxima of language design? In this talk, I'll start out by presenting known directions from the Rust language roadmap, and then continue on into some detailed speculative explorations the Rust language could go in. I'll discuss some lessons we can learn from other languages and language design experiments, and some problems we are still exploring potential solutions for. (This talk should not be taken as the concrete direction for Rust, only as an example of the degree of innovation we may need in order to reach the future.)
Josh Triplett

11:00 am

Morning Break

11:15 am

The Convergence of Robotics and Cloud-Native: Accelerating the Development of Intelligent Robot Applications

The fusion of robotics technology and cloud-native technology has garnered extensive attention in recent years. With the continuous growth of the robotics market and the trend towards intelligentization in the robotics industry, cloud-native robotics has become an important means to achieve robot intelligence and accelerate the development of robot applications. This talk will examine the challenges faced by the robotics industry and the challenges ROS faces in achieving intelligence, highlighting the pain points of the current robotics industry. It will then introduce a cloud-native robot system based on KubeEdge, exploring its application in cloud-native robot systems and how it supports the construction of intelligent robot applications. Finally, it will illustrate how cloud-native robot technology is used in practical scenarios, using the example of developing a cloud-native robot park-inspection application with edge-cloud collaboration.
JieZhang Wang

11:45 am

Real-time Safety Scheduling for Intelligent Driving Functions

Discussing the architecture, design, and implementation of real-time scheduling functions for L2-L4 intelligent driving tasks based on intelligent driving domain controllers, intelligent driving operating systems, and real-time kernels. Analyzing the problems encountered in the implementation of the scheduling function from the perspectives of sensors, kernels, IPC communication, and task execution. Sharing the issues faced by AICC (Guoqi Smart Control) during the implementation of deterministic scheduling and its practical mass production practices.
Hada Bao

12:15 pm

Lunch Break

1:45 pm

Bridging Typed Messages Between ROS2 and DORA

The Dora-ROS2 bridge makes it possible to create Dora nodes that are able to send and receive typed ROS2 messages. The bridge does not depend on the ROS2 libraries or build system. Instead, it communicates directly through DDS and parses the ROS2 `msg` files for type information. This talk focuses on some implementation details, including:
- the automatic generation of Rust structs for ROS2 types at compile time
- the dynamic serialization and deserialization of messages, guided by the ROS2 type information
Philipp Oppermann
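
To give a feel for what "Rust structs generated for ROS2 types" can look like, here is a hand-written sketch for geometry_msgs/msg/Twist. The real bridge generates such types at compile time from the `msg` files, and its actual module names, derives, and serialization traits may differ from this illustration.

```rust
// Hand-written illustration of typed message structs that a ROS2 bridge
// could generate from `msg` files. Field names and types follow the
// geometry_msgs definitions; everything else is an assumption.
use serde::{Deserialize, Serialize};

/// geometry_msgs/msg/Vector3
#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
pub struct Vector3 {
    pub x: f64,
    pub y: f64,
    pub z: f64,
}

/// geometry_msgs/msg/Twist
#[derive(Debug, Clone, Copy, Serialize, Deserialize)]
pub struct Twist {
    pub linear: Vector3,
    pub angular: Vector3,
}

fn main() {
    // A node could fill in a Twist and hand it to the bridge, which would
    // serialize it into the DDS wire format expected by ROS2 peers.
    let cmd = Twist {
        linear: Vector3 { x: 0.5, y: 0.0, z: 0.0 },
        angular: Vector3 { x: 0.0, y: 0.0, z: 0.2 },
    };
    println!("{cmd:?}");
}
```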

2:15 pm

Rust-Python FFI & Multi-language System Debugging

Rust-Python FFI & Multi-language System Debugging

Overcoming Python FFI challenges: dealing with the GIL, Python version linking, passing data, and parallelism, as well as handling data, tracing, metrics, logs, and errors in a multi-language system.
Xavier Tao

2:45 pm

Transparent zero-copy IPC in Dora

To reduce the overhead of sending large messages, Dora uses shared memory for passing data between local nodes. In this talk, we explore the design of Dora's node API, which enables safe and transparent data sharing with automated cleanup across Rust and Python nodes.
Philipp Oppermann
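
As background on the underlying mechanism, the sketch below shows generic file-backed shared memory using the `memmap2` crate: both processes map the same region, so a large payload is written once and read in place rather than copied through a socket. This is only an illustration of the technique; Dora's actual node API layers typed messages, ownership tracking, and automated cleanup on top, and the file path here is arbitrary.

```rust
// Generic sketch of file-backed shared memory with the `memmap2` crate.
use std::fs::OpenOptions;

use memmap2::MmapOptions;

fn main() -> std::io::Result<()> {
    // Both processes would open and map the same file to share this region.
    let file = OpenOptions::new()
        .read(true)
        .write(true)
        .create(true)
        .open("/tmp/dora-demo-shm")?;
    file.set_len(4096)?;

    let mut map = unsafe { MmapOptions::new().map_mut(&file)? };

    // "Send" a message by writing its length and bytes into the region.
    let payload = b"hello from the producer";
    map[0..8].copy_from_slice(&(payload.len() as u64).to_le_bytes());
    map[8..8 + payload.len()].copy_from_slice(payload);
    map.flush()?;

    // A consumer mapping the same file reads the bytes in place (zero-copy).
    let len = u64::from_le_bytes(map[0..8].try_into().unwrap()) as usize;
    println!("{}", String::from_utf8_lossy(&map[8..8 + len]));
    Ok(())
}
```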

3:15 pm

Dora-Drives: Autonomous Driving Made Simple

A step-by-step tutorial that allows beginners to write their own autonomous vehicle program from scratch using a simple starter kit. Dora-Drives makes learning autonomous driving faster and easier.
Xavier Tao

3:45 pm

Afternoon Break

4:00 pm

Theseus: a Rust-native OS for Safety and Reliability

Theseus is a novel operating system written from scratch entirely in Rust, with the goal of safety and reliability above all else. In this talk, we’ll describe Theseus’s unique OS structure of many tiny components, and look at how our key principle of intralingual design enables Theseus to shift typical OS responsibilities like resource management into the compiler. We’ll also dive into a few specific examples of how Theseus leverages Rust language and compiler features in the implementation of its core subsystems, as well as our ongoing efforts to formally verify the correctness of these subsystems. These design choices offer safety and efficiency benefits that make Theseus well suited for use in embedded and automotive contexts.
Kevin Boos

5:00 pm

Hypervisor and RTOS for Automotive platform

RTOS is a multitasking operating system intended for real-time applications. The scheduler is the key part to achieve multitasking. For an RTOS, the scheduler is designed to provide a predictable execution pattern. In an automotive system, predictable execution is necessary because certain events must be handled within a strictly defined time. In the automotive market, several major factors are accelerating hardware consolidation onto single SoC designs. More strategic is the evolutionary path of ECU consolidation – leading to domain-driven vehicle architecture and ultimately, to in-vehicle high-performance computing (HPC) platforms. The road to onboard HPCs presents a complicated journey for OEMs. While it will yield a decrease in the number of hardware components, thermal dissipation, and supply-chain dependence — as well as savings in development, testing, and toolchains — it means the software needed on the SoC becomes significantly more complex. The SoC needs to perform all the capabilities of the systems consolidated onto it, just as if they were separate systems. To do that, we need to run each component's discrete OS simultaneously, with each one safely isolated on its own virtual machine. This is the role of the hypervisor, running on top of an embedded RTOS.
Wei Chen

5:30 pm

Introduce Hardware-Level Device Isolation to RTOS

Most architectures in RTOSes use an MMU/MPU to isolate thread memory regions so that the system is protected from buggy or malicious code. However, an MMU/MPU can only limit memory accesses from CPUs. Memory accesses such as those from DMA are not protected by the MMU/MPU, which may cause critical security issues. This issue deserves attention because mainstream RTOSes like Zephyr have been adding more DMA devices to the code base, while many DMA devices might be buggy or even malicious. Therefore, without taking action, RTOSes with multiple DMA drivers would be under increasing security risk. Rich OSes generally use an IOMMU/SMMU to protect device memory accesses, and likewise, RTOSes can mitigate the above-mentioned security issue by introducing IOMMU/SMMU technology. Additionally, the introduction of the IOMMU/SMMU makes it possible for RTOSes to support more PCI and DMA devices, and even features such as virtualization. Because of the variety of hardware-level solutions provided by different architectures, it is necessary to add a new IOMMU/SMMU Subsys framework for RTOSes so it can be easily extended in the future. This talk will cover the Zephyr Arm SMMUv3 support based on the Subsys framework. An implementation example will be presented to showcase using SMMUv3 to protect memory access from a PCI AHCI device on the Arm FVP platform.
Jaxson Han

6:00 pm

Happy Hour

Schedule

Day 02

Software defined vehicle, autonomous driving and robotics

8:00 am

Registration

9:00 am

SPEAR: A Simulator for Photorealistic Embodied AI Research

Interactive simulators are becoming powerful tools for training embodied AI agents, but existing simulators suffer from limited content diversity, physical interactivity, and visual fidelity. In this talk, I will discuss the ongoing efforts in my research group to address these limitations. More specifically, I will discuss a new open-source simulator we are developing called SPEAR - A Simulator for Photorealistic Embodied AI Research. To develop SPEAR, we worked extensively with a team of professional artists to construct several highly realistic virtual indoor environments with thousands of unique objects that can be manipulated individually. Our environments are implemented as Unreal Engine assets, and we provide a high-level Python interface for interacting with robots in our environments. Throughout the talk, I will provide an overview of the current functionality in SPEAR, show example applications, and discuss the roadmap for our upcoming v0.4.0 release and beyond. SPEAR is available online at https://github.com/isl-org/spear.
Mike Roberts

10:00 am

Morning Break

10:15 am

For Autonomous Driving: Evidence of Symbolic Emergence in Neural Networks, and Its Model Checking and Data Cleaning

As the demand for the safety of autonomous driving grows, the use of explainability techniques to enhance the safety of neural network systems has become indispensable in modern autonomous driving applications. However, in the field of explainability, some core fundamental issues have not been properly modeled. For instance, the phenomenon of concept emergence in neural networks has not been mathematically proven, and the core mechanisms determining the prediction errors of neural networks cannot be clearly translated into symbolic concepts, which hinders the application of safety verification based on explainability. In this talk, I will introduce a number of studies from my team's recent theoretical framework of game-theoretic interactions for explainability, including progress and research plans on mathematically proving the concept-emergence phenomenon in neural networks. Based on this research, the presentation will demonstrate how to use explainability techniques to provide a provable, quantifiable, verifiable, and accountable symbolic explanation for autonomous driving systems. By ensuring rigor in the interpretation results, this offers new possibilities for formal verification and data evaluation in future deep learning systems.
Quanshi Zhang

11:15 am

Software Engineering Issues in Cyber-Physical Systems

Cyber-Physical Systems (CPS) are the results of the inevitable merging of computers with other domains of physical-world practice. A key challenge to CPS is software engineering: how to coordinate experts of disparate backgrounds to build dependable, maintainable, and efficient complex CPS. This challenge is exacerbated as each cyber/physical domain technology advances, resulting in combinatorial growth of the cross-domain interaction/interference complexity. The CPS software engineering challenge implies a huge problem space involving multiple dimensions. In this talk, we aim to introduce this problem space to a broad audience, to reveal its abundant academic and practical values, and to inspire new solutions. Specifically, we will explore the CPS software engineering problem space with several concrete examples. Through these examples, we will demonstrate how CPS software engineering problems differ from traditional software engineering problems, how the problems are formulated, and how cross-domain thinking can help address these problems.
Qixin Wang

12:15 pm

Lunch Break

1:45 pm

LimSim: A Long-duration, Interactive Multi-scenario Traffic Simulator

Traffic simulators are pivotal for validating autonomous driving technologies, enhancing testing efficiency and accuracy, and streamlining the R&D cycle. LimSim, developed by the Intelligent Traffic (Platform) team at the Shanghai Artificial Intelligence Lab, has had its research paper accepted by ITSC 2023, the most prestigious and influential conference in the intelligent transportation sector. Compared to existing simulators, LimSim offers broader functionality and superior performance. It can efficiently simulate city-level traffic networks across diverse scenarios, accurately depicting granular multi-vehicle dynamic interactions. Furthermore, LimSim boasts a lightweight visualization module and cross-platform integration tools.
Pinlong Cai

2:45 pm

Simulation and Data Closed-loop Toolchain for Autonomous Driving

This presentation will introduce how to design effective testing and validation methods for complex artificial intelligence systems like autonomous driving. We will discuss the role of simulation in the training, testing, and validation of AI systems, how simulation bridges the current data-driven research and development processes with traditional functional safety activities. Lastly, we will explore the role of the open-source ecosystem in advancing the implementation of autonomous driving technologies.
Yuxi Pan

3:45 pm

Afternoon Break

4:00 pm

Dynamic Scenario Construction in Carla: Starting with Leaderboard

This presentation will combine the development experience of Carla Leaderboard to demonstrate how to use Carla and its related toolchain to construct dynamic testing scenarios. It will also discuss how to utilize existing scenes to design test routes in different regions and styles. Furthermore, it will delve into the design philosophy of Carla Leaderboard competitions and explore the gap between competition and industrial practice.
Songyang Yan

5:00 pm

Autonomous Driving Controller Hardware in Loop Testing with Carla

Autonomous driving technology has developed rapidly, and autonomous vehicles are gradually entering the stage of industrialization. In order to accelerate the development of autonomous driving and realize the safety vision of zero accidents, it is particularly important to use Hardware in Loop (HIL) testing to evaluate the ability of autonomous vehicles. The HIL test injects the scene generated by the Carla environment into the real autonomous driving controller to be tested. Massive test cases and corner cases could be validated in the virtual simulation test.
Chen Gao

5:30 pm

The Demonstration and Verification of the Vehicle-Road Collaboration System in Carla Simulation

In the field of intelligent transportation, the vehicle-road collaboration system has become a research focus. The goal is to achieve a more efficient, safer, and environmentally friendly traffic experience through close cooperation between vehicles and road infrastructure. However, the actual deployment of the vehicle-road system is filled with complexities and challenges. By establishing a platform based on the Carla simulation and incorporating algorithm verification, costs, risks, and time investment can be significantly reduced, enhancing system quality and efficiency. The focus of this demonstration is to show how to build a virtual campus scenario based on the Carla simulation and use this scenario to pre-verify sensor layouts for actual deployment. Additionally, we will demonstrate how to build datasets on this platform and conduct algorithm tests, providing strong support for the research, development, and implementation of the vehicle-road system.
Guoliang You
Haojie Ren

Schedule

Day 01

WebXR, game engine and AIGC

8:00 am

Registration

9:00 am

GOSIM Workshop Kickoff

GOSIM Workshop Kickoff

Tao Jiang
Michael Yuan

9:30 am

GOSIM Keynotes - Building the Backbone: AI Infrastructure in an Open-Source Era

GOSIM Keynotes - Building the Backbone: AI Infrastructure in an Open-Source Era

In today's dynamic digital age, AI infrastructure has emerged as the backbone supporting the rapid advancements in artificial intelligence. However, as we stand on the cusp of a new AI-driven era, it is the power of open-source initiatives that promises to sustain and amplify these advancements. This presentation seeks to shed light on how open-source AI infrastructure is far beyond mere toolsets and platforms—it embodies the spirit of collective progress, community-driven innovation, and scalability. The talk will walk you through real-world examples of how open-source AI infrastructure projects are shaping the very foundation of AI's future. This journey will encompass the nitty-gritty of design considerations, scalability challenges, and harmonizing diverse hardware and software environments. Moreover, a deep dive into the ethical implications of AI infrastructure design and how inclusivity in the open-source community can influence ethical AI will be explored. The emphasis on creating tools that democratize AI, ensuring accessibility, and fostering a rich ecosystem will be highlighted. As we cast our vision forward, the session will also touch upon the symbiotic relationship between Large Language Models (LLMs), General Artificial Intelligence (GAI), and the critical role of open-source infrastructure in their evolution. Come, let's architect the AI-driven future in harmony, through the pillars of open-source infrastructure.
David Bian
Bruce Zhang

10:15 am

GOSIM Keynotes - Evolving Rust to the Next Ten Million Developers

GOSIM Keynotes - Evolving Rust to the Next Ten Million Developers

Developers who already use Rust have widely expressed satisfaction with it. However, Rust remains a challenging language for new developers to learn and become fluent in. As we continue to evolve Rust over the next decade, what changes should we make to help developers adopt Rust? How can we help the Rust software ecosystem scale with those developers, and what challenges do we foresee? How do we avoid getting stuck in local maxima of language design? In this talk, I'll start out by presenting known directions from the Rust language roadmap, and then continue on into some detailed speculative explorations the Rust language could go in. I'll discuss some lessons we can learn from other languages and language design experiments, and some problems we are still exploring potential solutions for. (This talk should not be taken as the concrete direction for Rust, only as an example of the degree of innovation we may need in order to reach the future.)
Josh Triplett

11:00 am

Morning Break

Morning Break

11:15 am

Mobile Rust in Open Harmony

Mobile Rust in Open Harmony

Huawei advocates for and promotes openness, concurrency, and security in mobile operating systems. Working within the limited resources of mobile devices, we provide the industry with a better, lightweight Rust-based parallel concurrency solution.
Yan Zhou

11:30 am

Robius: A Vision for Multi-Platform App Development in Rust

Robius: A Vision for Multi-Platform App Development in Rust

Developers frequently ask how they can create multi-platform applications in Rust, hoping to leverage Rust’s safety, performance, and robust ecosystem for a painless app dev experience. However, while Rust excels in systems programming environments, its app dev ecosystem is fragmented and not yet well-established or mature, particularly on mobile platforms. This talk will highlight our vision to establish an open-source community and drive collaboration among major contributors in the space of cross-platform Rust app frameworks. Specifically, we will dive into the motivation and goals of this vision, its technical requirements, and our implementation plans for a full-system reference design. Ultimately, we aim to realize a turnkey solution that enables developers of all experience levels to easily create robust, modern Rust apps on any platform.
Kevin Boos

12:15 pm

Lunch Break

Lunch Break

1:45 pm

Wolvic: Web Browsing on Extended Reality

Wolvic: Web Browsing on Extended Reality

Web browsing on Extended Reality enables users to navigate and interact with traditional Web-based content, as well as immersive WebXR experiences. In this presentation, we will draw from our experience creating Wolvic, an open-source Web browser for XR devices. We will begin with an overview of current Web-based solutions for different use cases, from video and gaming to social collaboration and productivity. We will discuss the technical challenges and design opportunities of creating a Web browser for XR devices. Finally, we will look at the complexities of developing a multiplatform and open-source XR application.
Felipe Erias

3:45 pm

Afternoon Break

Afternoon Break

4:00 pm

AI-Assisted 3D Content Generation and High-Performance Rendering

AI-Assisted 3D Content Generation and High-Performance Rendering

With the development of virtual reality and the metaverse, the creation of three-dimensional digital content has entered the fast lane. In particular, the metaverse is expected to allow every ordinary user to produce and edit three-dimensional digital content. However, current three-dimensional digital content creation heavily relies on complex software, expensive hardware, and professional operators, which poses a high barrier to ordinary users. Existing rendering platforms also find it difficult to support real-time global illumination for the complex three-dimensional scenes generated by ordinary users. This talk will share our research achievements in the intelligent creation of three-dimensional digital content, especially intelligent material modeling and lighting estimation for ordinary users, as well as a next-generation high-performance rendering framework based on intelligent frame interpolation. These results greatly reduce the cost of three-dimensional digital content creation and improve the rendering performance of large-scale three-dimensional scenes, providing technical support for the sustained development of new digital-economy industries such as virtual reality and the metaverse.
Jie Guo

5:00 pm

Building Experiences for XR Devices with the Magic of OpenXR

Building Experiences for XR Devices with the Magic of OpenXR

OpenXR is a royalty-free, open standard that provides high-performance access to Augmented Reality (AR) and Virtual Reality (VR) platforms and devices, collectively known as XR. The presentation will introduce the essential concepts of the OpenXR API and show the magic of OpenXR through several use cases. Finally, you will get a short tutorial on how to build your own OpenXR experience on the Android platform.
Baolin Fu

6:00 pm

Happy Hour

Happy Hour

Schedule

Day 02

WebXR, game engine and AIGC

8:00 am

Registration

Registration

9:00 am

Rapier: First Steps in Distributed Physics Simulation

Rapier: First Steps in Distributed Physics Simulation

Rapier is a powerful open-source physics engine designed for Rust. With its advanced 2D and 3D physics simulations, it's an excellent choice for game development, robotics, and animation projects. Its cross-platform determinism and compatibility with Bevy, WASM, and JS make it an ideal solution for desktop, web and multiplayer gaming. Our exploration today will focus on distributed physics simulation for the metaverse. We will discuss design and strategies to overcome the challenging numerical problems that may arise.
Sébastien Crozet

10:00 am

Morning Break

Morning Break

10:15 am

Bevy: Taking the Entity-Component-System Architecture Seriously

Bevy: Taking the Entity-Component-System Architecture Seriously

The Entity-Component-System (ECS) architecture is commonly used in games to speed up embarrassingly parallel, compute-heavy tasks. But if we take ergonomics (and extensions) seriously, it becomes a powerful and expressive framework for complex logic that borrows from the best of databases, scheduling and dependency injection. Join me as I explore how Bevy, a leading Rust game engine, answers the question of "what if it was all ECS?".
Alice Cecile

11:15 am

Opportunities and Challenges of Game Engines in Cross-domain Applications

Opportunities and Challenges of Game Engines in Cross-domain Applications

Game engines, as game editing tools, have become relatively mature. Because of their real-time, interactive characteristics and WYSIWYG (What You See Is What You Get) features, game engines are increasingly being adopted in non-gaming sectors such as film and TV special effects, digital twins, simulation drills, and smart terminals. However, when game engines are used across different industries, they invariably face significant adaptation challenges. Drawing on more than a decade of cross-industry experience, Dr. Wu will discuss the problems he has encountered and the solutions he has found when applying game engines in cross-domain applications, and explore these topics together with developers.
Xiaomao Wu

12:15 pm

Lunch Break

Lunch Break

1:45 pm

Creating Next Generation Multiplayer with Croquet

Creating Next Generation Multiplayer with Croquet

Croquet removes the complexity of traditional client/server systems and eliminates netcode, enabling synchronized simulations and gameplay that have simply not been possible before. Your project is multiuser from the first line of code, and operating costs drop dramatically. We will demonstrate the creation of advanced multiplayer applications using both the web-based Microverse World Builder and native applications built with Croquet for Unity.
David Smith

3:45 pm

Afternoon Break

Afternoon Break

4:00 pm

Scalable Graphics and Rendering Pipeline in Cocos Engine

Scalable Graphics and Rendering Pipeline in Cocos Engine

Cocos Creator is the new generation of game development tools in the Cocos family. It brings a complete set of 3D and 2D features while providing game developers with an intuitive, low-cost, and collaboration-friendly workflow, and it offers integrated solutions for automotive, XR, metaverse, and education. This talk will explain how Cocos has achieved great scalability with its flexible material system and customizable rendering pipeline. The material system separates lighting models and modularizes surface shaders, so surfaces can be fully customized without breaking rendering consistency. The RenderGraph infrastructure of the rendering pipeline strikes a fine balance between flexibility and usability.
Jin Kun

5:00 pm

Un-Conference Q&A Roundtable

Un-Conference Q&A Roundtable

Secure your seat at 2023's top workshop.