Small adjustments

Yan Lin 2025-05-16 22:20:19 +02:00
parent 01a9b1ac4d
commit 53c60f9add
11 changed files with 77 additions and 26 deletions


@@ -1,6 +1,6 @@
 primaryPublications:
   - title: "UVTM: Universal Vehicle Trajectory Modeling with ST Feature Domain Generation"
-    authors: "Yan Lin, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan"
+    authors: "<strong>Yan Lin</strong>, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan"
     tags:
       - "IEEE TKDE"
       - "2025"
@@ -263,7 +263,7 @@ services:
       - "IEEE, ACM member"
       - "Secretary of IEEE (Denmark Section) Computer Society"
       - "Reviewer for journals including TIST, TII, and TVT"
-      - "Member of program committees of ICLR, KDD, AAAI, CVPR, ICCV, IJCAI, and WWW"
+      - "Member of program committees of KDD, ICLR, NeurIPS, AAAI, CVPR, ICCV, IJCAI, and WWW"
 blogs:
   - title: "One Step Diffusion Models"

dist/blog/index.html vendored

@@ -43,12 +43,16 @@
 <div class="list-group list-group-flush">
   <div class="list-group-item px-0">
-    <a class="mb-1 paper-title blog-link" href="/blog/html/one-step-diffusion-models.html">One Step Diffusion Models</a> | <span class="paper-title text-muted">May 2025</span>
+    <a class="mb-1 paper-title blog-link text-decoration-none" href="/blog/html/one-step-diffusion-models.html">
+      One Step Diffusion Models <i class="bi bi-arrow-right-circle"></i>
+    </a> <span class="paper-title text-muted ms-2">May 2025</span>
     <p class="card-text mb-auto tldr">Despite the promising performance of diffusion models on continuous modality generation, one deficiency that is holding them back is their requirement for multi-step denoising processes, which can be computationally expensive. In this article, we examine recent works that aim to build diffusion models capable of performing sampling in one or a few steps.</p>
   </div>
   <div class="list-group-item px-0">
-    <a class="mb-1 paper-title blog-link" href="/blog/html/multi-modal-transformer.html">Multi-modal and Multi-function Transformers</a> | <span class="paper-title text-muted">April 2025</span>
+    <a class="mb-1 paper-title blog-link text-decoration-none" href="/blog/html/multi-modal-transformer.html">
+      Multi-modal and Multi-function Transformers <i class="bi bi-arrow-right-circle"></i>
+    </a> <span class="paper-title text-muted ms-2">April 2025</span>
     <p class="card-text mb-auto tldr">Multi-modal and multi-function Transformers enables a single architecture to process diverse data types such as language, images, and videos simultaneously. These models employ techniques like vector quantization and lookup-free quantization to map different modalities into a unified embedding space, allowing the Transformer to handle them within the same sequence. Beyond processing multiple data types, these architectures can also combine different functionalities-such as auto-regressive language generation and diffusion-based image creation-within a single model.</p>
   </div>
@@ -60,7 +64,10 @@
 <footer>
   <div class="container">
     <p class="text-center text-secondary" style="font-size: 0.8rem; font-family: 'Lato', sans-serif;">
-      Copyright © 2025. Designed and implemented by Yan Lin.
+      <span class="dark-mode-text"><i class="bi bi-moon-fill"></i> ずっと真夜中でいいのに。</span>
+      <span class="light-mode-text"><i class="bi bi-sun-fill"></i> ずっと正午でいいのに。</span>
+      <span class="mx-1">|</span>
+      Designed and implemented by Yan Lin.
       <span class="mx-1">|</span>
       <a class="link link-secondary" target="_blank" href="https://git.yanlincs.com/yanlin/Homepage">Source Code</a>
     </p>

dist/index.css vendored

@@ -244,4 +244,22 @@ footer {
   padding: 1rem 0;
   width: 100%;
   flex-shrink: 0;
 }
+
+.dark-mode-text {
+  display: none;
+}
+
+.light-mode-text {
+  display: inline;
+}
+
+@media (prefers-color-scheme: dark) {
+  .dark-mode-text {
+    display: inline;
+  }
+
+  .light-mode-text {
+    display: none;
+  }
+}
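The rules above implement a JavaScript-free theme toggle: both spans are always present in the markup, and the `prefers-color-scheme` media query decides which one is displayed. A minimal standalone sketch of the pattern (class names match the commit; the page content is illustrative):

```html
<!-- Hypothetical demo page for the visibility toggle added above. -->
<style>
  .dark-mode-text  { display: none; }
  .light-mode-text { display: inline; }

  /* When the OS/browser reports a dark color scheme, swap visibility. */
  @media (prefers-color-scheme: dark) {
    .dark-mode-text  { display: inline; }
    .light-mode-text { display: none; }
  }
</style>
<p>
  <span class="dark-mode-text">Visible only in dark mode</span>
  <span class="light-mode-text">Visible only in light mode</span>
</p>
```

Note that this tracks the OS-level preference only; a manual in-page theme switch would additionally need JavaScript to toggle a class.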

dist/index.html vendored

@@ -97,7 +97,7 @@
 </div>
 </div>
 <h5 class="mb-1 paper-title">UVTM: Universal Vehicle Trajectory Modeling with ST Feature Domain Generation</h5>
-<p class="card-text mb-auto author-name">Yan Lin, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan</p>
+<p class="card-text mb-auto author-name"><strong>Yan Lin</strong>, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan</p>
 </div>
@@ -680,7 +680,7 @@
 <li>Reviewer for journals including TIST, TII, and TVT</li>
-<li>Member of program committees of ICLR, KDD, AAAI, CVPR, ICCV, IJCAI, and WWW</li>
+<li>Member of program committees of KDD, ICLR, NeurIPS, AAAI, CVPR, ICCV, IJCAI, and WWW</li>
 </ul>
 </div>
@@ -694,12 +694,16 @@
 <div class="list-group list-group-flush" id="blog-list">
   <div class="list-group-item px-0">
-    <a class="mb-1 paper-title blog-link" href="/blog/html/one-step-diffusion-models.html">One Step Diffusion Models</a> | <span class="paper-title text-muted">May 2025</span>
+    <a class="mb-1 paper-title blog-link text-decoration-none" href="/blog/html/one-step-diffusion-models.html">
+      One Step Diffusion Models <i class="bi bi-arrow-right-circle"></i>
+    </a> <span class="paper-title text-muted ms-2">May 2025</span>
     <p class="card-text mb-auto tldr">Despite the promising performance of diffusion models on continuous modality generation, one deficiency that is holding them back is their requirement for multi-step denoising processes, which can be computationally expensive. In this article, we examine recent works that aim to build diffusion models capable of performing sampling in one or a few steps.</p>
   </div>
   <div class="list-group-item px-0">
-    <a class="mb-1 paper-title blog-link" href="/blog/html/multi-modal-transformer.html">Multi-modal and Multi-function Transformers</a> | <span class="paper-title text-muted">April 2025</span>
+    <a class="mb-1 paper-title blog-link text-decoration-none" href="/blog/html/multi-modal-transformer.html">
+      Multi-modal and Multi-function Transformers <i class="bi bi-arrow-right-circle"></i>
+    </a> <span class="paper-title text-muted ms-2">April 2025</span>
     <p class="card-text mb-auto tldr">Multi-modal and multi-function Transformers enables a single architecture to process diverse data types such as language, images, and videos simultaneously. These models employ techniques like vector quantization and lookup-free quantization to map different modalities into a unified embedding space, allowing the Transformer to handle them within the same sequence. Beyond processing multiple data types, these architectures can also combine different functionalities-such as auto-regressive language generation and diffusion-based image creation-within a single model.</p>
   </div>
@@ -711,7 +715,10 @@
 <footer>
   <div class="container">
     <p class="text-center text-secondary" style="font-size: 0.8rem; font-family: 'Lato', sans-serif;">
-      Copyright © 2025. Designed and implemented by Yan Lin.
+      <span class="dark-mode-text"><i class="bi bi-moon-fill"></i> ずっと真夜中でいいのに。</span>
+      <span class="light-mode-text"><i class="bi bi-sun-fill"></i> ずっと正午でいいのに。</span>
+      <span class="mx-1">|</span>
+      Designed and implemented by Yan Lin.
       <span class="mx-1">|</span>
       <a class="link link-secondary" target="_blank" href="https://git.yanlincs.com/yanlin/Homepage">Source Code</a>
     </p>


@@ -122,7 +122,10 @@
 <footer>
   <div class="container">
     <p class="text-center text-secondary" style="font-size: 0.8rem; font-family: 'Lato', sans-serif;">
-      Copyright © 2025. Designed and implemented by Yan Lin.
+      <span class="dark-mode-text"><i class="bi bi-moon-fill"></i> ずっと真夜中でいいのに。</span>
+      <span class="light-mode-text"><i class="bi bi-sun-fill"></i> ずっと正午でいいのに。</span>
+      <span class="mx-1">|</span>
+      Designed and implemented by Yan Lin.
       <span class="mx-1">|</span>
       <a class="link link-secondary" target="_blank" href="https://git.yanlincs.com/yanlin/Homepage">Source Code</a>
     </p>


@@ -173,7 +173,10 @@
 <footer>
   <div class="container">
     <p class="text-center text-secondary" style="font-size: 0.8rem; font-family: 'Lato', sans-serif;">
-      Copyright © 2025. Designed and implemented by Yan Lin.
+      <span class="dark-mode-text"><i class="bi bi-moon-fill"></i> ずっと真夜中でいいのに。</span>
+      <span class="light-mode-text"><i class="bi bi-sun-fill"></i> ずっと正午でいいのに。</span>
+      <span class="mx-1">|</span>
+      Designed and implemented by Yan Lin.
       <span class="mx-1">|</span>
       <a class="link link-secondary" target="_blank" href="https://git.yanlincs.com/yanlin/Homepage">Source Code</a>
     </p>


@@ -77,7 +77,7 @@
 </div>
 </div>
 <h5 class="mb-1 paper-title">UVTM: Universal Vehicle Trajectory Modeling with ST Feature Domain Generation</h5>
-<p class="card-text mb-auto author-name">Yan Lin, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan</p>
+<p class="card-text mb-auto author-name"><strong>Yan Lin</strong>, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan</p>
 </div>
@@ -459,7 +459,10 @@
 <footer>
   <div class="container">
     <p class="text-center text-secondary" style="font-size: 0.8rem; font-family: 'Lato', sans-serif;">
-      Copyright © 2025. Designed and implemented by Yan Lin.
+      <span class="dark-mode-text"><i class="bi bi-moon-fill"></i> ずっと真夜中でいいのに。</span>
+      <span class="light-mode-text"><i class="bi bi-sun-fill"></i> ずっと正午でいいのに。</span>
+      <span class="mx-1">|</span>
+      Designed and implemented by Yan Lin.
       <span class="mx-1">|</span>
       <a class="link link-secondary" target="_blank" href="https://git.yanlincs.com/yanlin/Homepage">Source Code</a>
     </p>


@@ -1,4 +1,4 @@
-{ pkgs ? import <nixpkgs> {}, isDev ? true, restartRemote ? false, remoteHost ? "hetzner" }:
+{ pkgs ? import <nixpkgs> {}, isDev ? true, restartRemote ? false }:
 pkgs.mkShell {
   packages = with pkgs; [
@@ -9,6 +9,7 @@ pkgs.mkShell {
   shellHook = let
     venvPath = "$HOME/venv/homepage";
+    remoteHost = "hetzner";
   in ''
     export PIP_REQUIRE_VIRTUALENV=1
     export VENV_PATH=${venvPath}
@@ -18,20 +19,21 @@ pkgs.mkShell {
     fi
     source $VENV_PATH/bin/activate
     pip install -r requirements.txt
+    python parser/md.py
+    python generate.py
     ${if isDev then ''
-      pip install watchdog
-      python watch.py
+      pip install watchdog==6.0.0
+      python watch.py && exit
     '' else ''
-      python parser/md.py
-      python generate.py
       rsync -avP --delete ./dist/ ${remoteHost}:/root/homepage/dist
       rsync -avP ./docker-compose.yml ${remoteHost}:/root/homepage/
       ${if restartRemote then ''
        ssh ${remoteHost} "cd /root/homepage && docker compose down && docker compose up -d"
       '' else ""}
+      exit
     ''}
   '';
 }


@@ -49,7 +49,10 @@
 <footer>
   <div class="container">
     <p class="text-center text-secondary" style="font-size: 0.8rem; font-family: 'Lato', sans-serif;">
-      Copyright © 2025. Designed and implemented by Yan Lin.
+      <span class="dark-mode-text"><i class="bi bi-moon-fill"></i> ずっと真夜中でいいのに。</span>
+      <span class="light-mode-text"><i class="bi bi-sun-fill"></i> ずっと正午でいいのに。</span>
+      <span class="mx-1">|</span>
+      Designed and implemented by Yan Lin.
       <span class="mx-1">|</span>
       <a class="link link-secondary" target="_blank" href="https://git.yanlincs.com/yanlin/Homepage">Source Code</a>
     </p>


@@ -1,4 +1,6 @@
 <div class="list-group-item px-0">
-  <a class="mb-1 paper-title blog-link" href="/blog/html/{{ blog.path }}.html">{{ blog.title }}</a> | <span class="paper-title text-muted">{{ blog.badge }}</span>
+  <a class="mb-1 paper-title blog-link text-decoration-none" href="/blog/html/{{ blog.path }}.html">
+    {{ blog.title }} <i class="bi bi-arrow-right-circle"></i>
+  </a> <span class="paper-title text-muted ms-2">{{ blog.badge }}</span>
   <p class="card-text mb-auto tldr">{{ blog.tldr }}</p>
 </div>


@@ -8,14 +8,18 @@ class ChangeHandler(FileSystemEventHandler):
     def on_modified(self, event):
         if event.is_directory:
             return
-        if any(event.src_path.endswith(ext) for ext in ['.md', '.py', '.html', '.css', '.js']):
+        if event.src_path.endswith('.html') and '/dist/' in event.src_path:
+            return
+        if any(event.src_path.endswith(ext) for ext in ['.md', '.py', '.html', '.css', '.js', '.yaml']):
             print(f"File {event.src_path} has been modified")
             self.regenerate()

     def on_created(self, event):
         if event.is_directory:
             return
-        if any(event.src_path.endswith(ext) for ext in ['.md', '.py', '.html', '.css', '.js']):
+        if event.src_path.endswith('.html') and '/dist/' in event.src_path:
+            return
+        if any(event.src_path.endswith(ext) for ext in ['.md', '.py', '.html', '.css', '.js', '.yaml']):
             print(f"File {event.src_path} has been created")
             self.regenerate()
@@ -28,8 +32,7 @@ class ChangeHandler(FileSystemEventHandler):
 if __name__ == "__main__":
     event_handler = ChangeHandler()
     observer = Observer()
-    # Watch both current directory and dist directory
-    observer.schedule(event_handler, "templates", recursive=True)
+    observer.schedule(event_handler, ".", recursive=True)
     observer.start()
     http_server = subprocess.Popen(["python", "-m", "http.server", "8000", "--directory", "dist"])
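The filter added to watch.py can be isolated into a small predicate. The sketch below (`should_regenerate` is a hypothetical helper name, not a function from the repository) mirrors the new behavior: ignore generated HTML under dist/ so the watcher does not retrigger on its own output once the whole project directory is watched recursively, and otherwise regenerate on any watched source extension, now including .yaml.

```python
# Extensions that should trigger a rebuild, mirroring the list in watch.py.
WATCHED_EXTS = ('.md', '.py', '.html', '.css', '.js', '.yaml')

def should_regenerate(src_path: str) -> bool:
    """Return True if a change at src_path should trigger site regeneration."""
    # Generated HTML in dist/ is the watcher's own output; reacting to it
    # would cause an endless modify -> regenerate loop.
    if src_path.endswith('.html') and '/dist/' in src_path:
        return False
    # str.endswith accepts a tuple, so one call covers every extension.
    return src_path.endswith(WATCHED_EXTS)
```

Paths here are assumed to look like watchdog's `event.src_path` when watching `"."` (e.g. `./dist/blog/index.html`), which is why the `/dist/` substring check works.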