From 53c60f9add9059cbc19bc0b54ff05704218f0e00 Mon Sep 17 00:00:00 2001
From: Yan Lin
Date: Fri, 16 May 2025 22:20:19 +0200
Subject: [PATCH] Small adjustments

---
 data.yaml                     |  4 ++--
 dist/blog/index.html          | 13 ++++++++++---
 dist/index.css                | 18 ++++++++++++++++++
 dist/index.html               | 17 ++++++++++++-----
 dist/presentations/index.html |  5 ++++-
 dist/projects/index.html      |  5 ++++-
 dist/publications/index.html  |  7 +++++--
 shell.nix                     | 14 ++++++++------
 templates/base.html           |  5 ++++-
 templates/partials/blog.html  |  4 +++-
 watch.py                      | 11 +++++++----
 11 files changed, 77 insertions(+), 26 deletions(-)

diff --git a/data.yaml b/data.yaml
index 0bf271e..ee6c0fe 100644
--- a/data.yaml
+++ b/data.yaml
@@ -1,6 +1,6 @@
 primaryPublications:
   - title: "UVTM: Universal Vehicle Trajectory Modeling with ST Feature Domain Generation"
-    authors: "Yan Lin, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan"
+    authors: "Yan Lin, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan"
     tags:
       - "IEEE TKDE"
       - "2025"
@@ -263,7 +263,7 @@ services:
   - "IEEE, ACM member"
   - "Secretary of IEEE (Denmark Section) Computer Society"
   - "Reviewer for journals including TIST, TII, and TVT"
-  - "Member of program committees of ICLR, KDD, AAAI, CVPR, ICCV, IJCAI, and WWW"
+  - "Member of program committees of KDD, ICLR, NeurIPS, AAAI, CVPR, ICCV, IJCAI, and WWW"

 blogs:
   - title: "One Step Diffusion Models"

diff --git a/dist/blog/index.html b/dist/blog/index.html
index 63bc6ea..f7395b9 100644
--- a/dist/blog/index.html
+++ b/dist/blog/index.html
@@ -43,12 +43,16 @@
-One Step Diffusion Models | May 2025
+
+One Step Diffusion Models
+May 2025

Despite the promising performance of diffusion models on continuous modality generation, one deficiency holding them back is their reliance on multi-step denoising, which can be computationally expensive. In this article, we examine recent works that aim to build diffusion models capable of sampling in one or a few steps.

-Multi-modal and Multi-function Transformers | April 2025
+
+Multi-modal and Multi-function Transformers
+April 2025

Multi-modal and multi-function Transformers enable a single architecture to process diverse data types such as language, images, and videos simultaneously. These models employ techniques like vector quantization and lookup-free quantization to map different modalities into a unified embedding space, allowing the Transformer to handle them within the same sequence. Beyond processing multiple data types, these architectures can also combine different functionalities, such as auto-regressive language generation and diffusion-based image creation, within a single model.

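The one-or-few-step sampling mentioned in the first teaser above can be illustrated with a toy sketch (entirely illustrative and not from the post: the per-step "denoiser" is a hypothetical shrinkage map, and the one-step shortcut simply applies the composed map). The point it shows is the cost contrast: a conventional sampler pays one denoiser call per step, while a distilled one-step sampler reaches the same endpoint with a single call.

```python
# Toy contrast between iterative denoising and a distilled one-step sampler.
# Both the per-step "denoiser" (a fixed shrinkage toward 0) and the one-step
# shortcut are hypothetical stand-ins, used only to count function calls.

calls = {"multi": 0, "one": 0}

def denoise_step(x):
    """One hypothetical denoising step; each call stands for a network pass."""
    calls["multi"] += 1
    return 0.5 * x

def multi_step_sample(x, steps=50):
    """Conventional sampling: `steps` sequential denoiser calls."""
    for _ in range(steps):
        x = denoise_step(x)
    return x

def one_step_sample(x, steps=50):
    """Distilled sampler: the composed map applied in a single call."""
    calls["one"] += 1
    return x * 0.5 ** steps

noise = 1.0
a = multi_step_sample(noise)
b = one_step_sample(noise)
print(calls["multi"], calls["one"])  # 50 1
```

Real one-step methods (consistency models, distillation) learn the shortcut map rather than computing it in closed form, but the call-count arithmetic is the same.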
@@ -60,7 +64,10 @@

-Copyright © 2025. Designed and implemented by Yan Lin.
+ずっと真夜中でいいのに。
+ずっと正午でいいのに。
+|
+Designed and implemented by Yan Lin.
 |
 Source Code

diff --git a/dist/index.css b/dist/index.css
index bd66a5f..50e7c75 100644
--- a/dist/index.css
+++ b/dist/index.css
@@ -244,4 +244,22 @@ footer {
   padding: 1rem 0;
   width: 100%;
   flex-shrink: 0;
+}
+
+.dark-mode-text {
+  display: none;
+}
+
+.light-mode-text {
+  display: inline;
+}
+
+@media (prefers-color-scheme: dark) {
+  .dark-mode-text {
+    display: inline;
+  }
+
+  .light-mode-text {
+    display: none;
+  }
 }
\ No newline at end of file
diff --git a/dist/index.html b/dist/index.html
index 4d4bd24..5c3b430 100644
--- a/dist/index.html
+++ b/dist/index.html
@@ -97,7 +97,7 @@
 UVTM: Universal Vehicle Trajectory Modeling with ST Feature Domain Generation
-Yan Lin, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan
+Yan Lin, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan
@@ -680,7 +680,7 @@
 Reviewer for journals including TIST, TII, and TVT
-Member of program committees of ICLR, KDD, AAAI, CVPR, ICCV, IJCAI, and WWW
+Member of program committees of KDD, ICLR, NeurIPS, AAAI, CVPR, ICCV, IJCAI, and WWW
@@ -694,12 +694,16 @@
-One Step Diffusion Models | May 2025
+
+One Step Diffusion Models
+May 2025

Despite the promising performance of diffusion models on continuous modality generation, one deficiency holding them back is their reliance on multi-step denoising, which can be computationally expensive. In this article, we examine recent works that aim to build diffusion models capable of sampling in one or a few steps.

-Multi-modal and Multi-function Transformers | April 2025
+
+Multi-modal and Multi-function Transformers
+April 2025

Multi-modal and multi-function Transformers enable a single architecture to process diverse data types such as language, images, and videos simultaneously. These models employ techniques like vector quantization and lookup-free quantization to map different modalities into a unified embedding space, allowing the Transformer to handle them within the same sequence. Beyond processing multiple data types, these architectures can also combine different functionalities, such as auto-regressive language generation and diffusion-based image creation, within a single model.

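The vector-quantization step named in the teaser above can be sketched in a few lines (the codebook and embeddings are made-up toy values, not from the post): each continuous embedding, whatever its source modality, is replaced by the index of its nearest codebook entry, so text and image inputs end up as tokens in one shared discrete vocabulary.

```python
# Toy vector quantization: map continuous embeddings to nearest-codebook indices.
# The codebook and the two input vectors are illustrative values only.

def quantize(embedding, codebook):
    """Return the index of the codebook vector closest to `embedding` (squared L2)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sq_dist(embedding, codebook[i]))

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

image_patch_embedding = [0.9, 0.1]  # hypothetical image-derived vector
text_token_embedding = [0.1, 0.8]   # hypothetical text-derived vector

# Both modalities become indices into the same discrete vocabulary.
tokens = [quantize(v, codebook) for v in (image_patch_embedding, text_token_embedding)]
print(tokens)  # -> [1, 2]
```

In practice the codebook is learned jointly with the encoder (VQ-VAE-style), but the lookup itself is exactly this nearest-neighbour argmin.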
    @@ -711,7 +715,10 @@