diff --git a/data.yaml b/data.yaml
index 0bf271e..ee6c0fe 100644
--- a/data.yaml
+++ b/data.yaml
@@ -1,6 +1,6 @@
 primaryPublications:
   - title: "UVTM: Universal Vehicle Trajectory Modeling with ST Feature Domain Generation"
-    authors: "Yan Lin, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan"
+    authors: "Yan Lin, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan"
     tags:
       - "IEEE TKDE"
       - "2025"
@@ -263,7 +263,7 @@ services:
   - "IEEE, ACM member"
   - "Secretary of IEEE (Denmark Section) Computer Society"
   - "Reviewer for journals including TIST, TII, and TVT"
-  - "Member of program committees of ICLR, KDD, AAAI, CVPR, ICCV, IJCAI, and WWW"
+  - "Member of program committees of KDD, ICLR, NeurIPS, AAAI, CVPR, ICCV, IJCAI, and WWW"
 blogs:
   - title: "One Step Diffusion Models"
diff --git a/dist/blog/index.html b/dist/blog/index.html
index 63bc6ea..f7395b9 100644
--- a/dist/blog/index.html
+++ b/dist/blog/index.html
@@ -43,12 +43,16 @@
- One Step Diffusion Models | May 2025
+
+ One Step Diffusion Models
+ May 2025

Despite the promising performance of diffusion models on continuous modality generation, one deficiency that is holding them back is their requirement for multi-step denoising processes, which can be computationally expensive. In this article, we examine recent works that aim to build diffusion models capable of performing sampling in one or a few steps.

- Multi-modal and Multi-function Transformers | April 2025
+
+ Multi-modal and Multi-function Transformers
+ April 2025

Multi-modal and multi-function Transformers enable a single architecture to process diverse data types such as language, images, and videos simultaneously. These models employ techniques like vector quantization and lookup-free quantization to map different modalities into a unified embedding space, allowing the Transformer to handle them within the same sequence. Beyond processing multiple data types, these architectures can also combine different functionalities, such as auto-regressive language generation and diffusion-based image creation, within a single model.

@@ -60,7 +64,10 @@
UVTM: Universal Vehicle Trajectory Modeling with ST Feature Domain Generation
- Yan Lin, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan
+ Yan Lin, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan
@@ -680,7 +680,7 @@
  • Reviewer for journals including TIST, TII, and TVT
- • Member of program committees of ICLR, KDD, AAAI, CVPR, ICCV, IJCAI, and WWW
+ • Member of program committees of KDD, ICLR, NeurIPS, AAAI, CVPR, ICCV, IJCAI, and WWW
@@ -694,12 +694,16 @@
    - One Step Diffusion Models | May 2025
    +
    + One Step Diffusion Models
    + May 2025

    Despite the promising performance of diffusion models on continuous modality generation, one deficiency that is holding them back is their requirement for multi-step denoising processes, which can be computationally expensive. In this article, we examine recent works that aim to build diffusion models capable of performing sampling in one or a few steps.

    - Multi-modal and Multi-function Transformers | April 2025
    +
    + Multi-modal and Multi-function Transformers
    + April 2025

    Multi-modal and multi-function Transformers enable a single architecture to process diverse data types such as language, images, and videos simultaneously. These models employ techniques like vector quantization and lookup-free quantization to map different modalities into a unified embedding space, allowing the Transformer to handle them within the same sequence. Beyond processing multiple data types, these architectures can also combine different functionalities, such as auto-regressive language generation and diffusion-based image creation, within a single model.

    @@ -711,7 +715,10 @@