Small adjustments
parent 01a9b1ac4d
commit 53c60f9add
11 changed files with 77 additions and 26 deletions

dist/blog/index.html (vendored; 13 lines changed)

@@ -43,12 +43,16 @@
<div class="list-group list-group-flush">
|
||||
|
||||
<div class="list-group-item px-0">
|
||||
<a class="mb-1 paper-title blog-link" href="/blog/html/one-step-diffusion-models.html">One Step Diffusion Models</a> | <span class="paper-title text-muted">May 2025</span>
|
||||
<a class="mb-1 paper-title blog-link text-decoration-none" href="/blog/html/one-step-diffusion-models.html">
|
||||
One Step Diffusion Models <i class="bi bi-arrow-right-circle"></i>
|
||||
</a> <span class="paper-title text-muted ms-2">May 2025</span>
|
||||
<p class="card-text mb-auto tldr">Despite the promising performance of diffusion models on continuous modality generation, one deficiency that is holding them back is their requirement for multi-step denoising processes, which can be computationally expensive. In this article, we examine recent works that aim to build diffusion models capable of performing sampling in one or a few steps.</p>
|
||||
</div>
|
||||
|
||||
<div class="list-group-item px-0">
|
||||
<a class="mb-1 paper-title blog-link" href="/blog/html/multi-modal-transformer.html">Multi-modal and Multi-function Transformers</a> | <span class="paper-title text-muted">April 2025</span>
|
||||
<a class="mb-1 paper-title blog-link text-decoration-none" href="/blog/html/multi-modal-transformer.html">
|
||||
Multi-modal and Multi-function Transformers <i class="bi bi-arrow-right-circle"></i>
|
||||
</a> <span class="paper-title text-muted ms-2">April 2025</span>
|
||||
<p class="card-text mb-auto tldr">Multi-modal and multi-function Transformers enables a single architecture to process diverse data types such as language, images, and videos simultaneously. These models employ techniques like vector quantization and lookup-free quantization to map different modalities into a unified embedding space, allowing the Transformer to handle them within the same sequence. Beyond processing multiple data types, these architectures can also combine different functionalities-such as auto-regressive language generation and diffusion-based image creation-within a single model.</p>
|
||||
</div>
|
||||
|
||||
|

@@ -60,7 +64,10 @@
 <footer>
   <div class="container">
     <p class="text-center text-secondary" style="font-size: 0.8rem; font-family: 'Lato', sans-serif;">
-      Copyright © 2025. Designed and implemented by Yan Lin.
+      <span class="dark-mode-text"><i class="bi bi-moon-fill"></i> ずっと真夜中でいいのに。</span>
+      <span class="light-mode-text"><i class="bi bi-sun-fill"></i> ずっと正午でいいのに。</span>
+      <span class="mx-1">|</span>
+      Designed and implemented by Yan Lin.
       <span class="mx-1">|</span>
       <a class="link link-secondary" target="_blank" href="https://git.yanlincs.com/yanlin/Homepage">Source Code</a>
     </p>

dist/index.css (vendored; 18 lines changed)

@@ -244,4 +244,22 @@ footer {
 padding: 1rem 0;
 width: 100%;
 flex-shrink: 0;
 }
+
+.dark-mode-text {
+  display: none;
+}
+
+.light-mode-text {
+  display: inline;
+}
+
+@media (prefers-color-scheme: dark) {
+  .dark-mode-text {
+    display: inline;
+  }
+
+  .light-mode-text {
+    display: none;
+  }
+}
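
The added rules implement a CSS-only scheme toggle: markup carries both a .dark-mode-text and a .light-mode-text variant, and the prefers-color-scheme media query decides which one renders, with no JavaScript involved. The footer diffs in this commit use exactly this pair of classes. Below is a minimal self-contained sketch of the pattern; the page skeleton and placeholder copy are illustrative, not part of the commit:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <style>
    /* Default (light scheme): show the light variant, hide the dark one. */
    .dark-mode-text  { display: none; }
    .light-mode-text { display: inline; }

    /* When the OS or browser reports a dark scheme, swap visibility. */
    @media (prefers-color-scheme: dark) {
      .dark-mode-text  { display: inline; }
      .light-mode-text { display: none; }
    }
  </style>
</head>
<body>
  <p>
    <span class="dark-mode-text">Visible only in dark mode</span>
    <span class="light-mode-text">Visible only in light mode</span>
  </p>
</body>
</html>

Because the media query is evaluated live, switching the system theme swaps the spans immediately without a reload; both variants stay in the DOM, which is why the footer markup adds two spans rather than one.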

dist/index.html (vendored; 17 lines changed)

@@ -97,7 +97,7 @@
 </div>
 </div>
 <h5 class="mb-1 paper-title">UVTM: Universal Vehicle Trajectory Modeling with ST Feature Domain Generation</h5>
-<p class="card-text mb-auto author-name">Yan Lin, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan</p>
+<p class="card-text mb-auto author-name"><strong>Yan Lin</strong>, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan</p>
 </div>
 
 

@@ -680,7 +680,7 @@
 
 <li>Reviewer for journals including TIST, TII, and TVT</li>
 
-<li>Member of program committees of ICLR, KDD, AAAI, CVPR, ICCV, IJCAI, and WWW</li>
+<li>Member of program committees of KDD, ICLR, NeurIPS, AAAI, CVPR, ICCV, IJCAI, and WWW</li>
 
 </ul>
 </div>

@@ -694,12 +694,16 @@
<div class="list-group list-group-flush" id="blog-list">
|
||||
|
||||
<div class="list-group-item px-0">
|
||||
<a class="mb-1 paper-title blog-link" href="/blog/html/one-step-diffusion-models.html">One Step Diffusion Models</a> | <span class="paper-title text-muted">May 2025</span>
|
||||
<a class="mb-1 paper-title blog-link text-decoration-none" href="/blog/html/one-step-diffusion-models.html">
|
||||
One Step Diffusion Models <i class="bi bi-arrow-right-circle"></i>
|
||||
</a> <span class="paper-title text-muted ms-2">May 2025</span>
|
||||
<p class="card-text mb-auto tldr">Despite the promising performance of diffusion models on continuous modality generation, one deficiency that is holding them back is their requirement for multi-step denoising processes, which can be computationally expensive. In this article, we examine recent works that aim to build diffusion models capable of performing sampling in one or a few steps.</p>
|
||||
</div>
|
||||
|
||||
<div class="list-group-item px-0">
|
||||
<a class="mb-1 paper-title blog-link" href="/blog/html/multi-modal-transformer.html">Multi-modal and Multi-function Transformers</a> | <span class="paper-title text-muted">April 2025</span>
|
||||
<a class="mb-1 paper-title blog-link text-decoration-none" href="/blog/html/multi-modal-transformer.html">
|
||||
Multi-modal and Multi-function Transformers <i class="bi bi-arrow-right-circle"></i>
|
||||
</a> <span class="paper-title text-muted ms-2">April 2025</span>
|
||||
<p class="card-text mb-auto tldr">Multi-modal and multi-function Transformers enables a single architecture to process diverse data types such as language, images, and videos simultaneously. These models employ techniques like vector quantization and lookup-free quantization to map different modalities into a unified embedding space, allowing the Transformer to handle them within the same sequence. Beyond processing multiple data types, these architectures can also combine different functionalities-such as auto-regressive language generation and diffusion-based image creation-within a single model.</p>
|
||||
</div>
|
||||
|
||||
|

@@ -711,7 +715,10 @@
 <footer>
   <div class="container">
     <p class="text-center text-secondary" style="font-size: 0.8rem; font-family: 'Lato', sans-serif;">
-      Copyright © 2025. Designed and implemented by Yan Lin.
+      <span class="dark-mode-text"><i class="bi bi-moon-fill"></i> ずっと真夜中でいいのに。</span>
+      <span class="light-mode-text"><i class="bi bi-sun-fill"></i> ずっと正午でいいのに。</span>
+      <span class="mx-1">|</span>
+      Designed and implemented by Yan Lin.
       <span class="mx-1">|</span>
       <a class="link link-secondary" target="_blank" href="https://git.yanlincs.com/yanlin/Homepage">Source Code</a>
     </p>

dist/presentations/index.html (vendored; 5 lines changed)

@@ -122,7 +122,10 @@
 <footer>
   <div class="container">
     <p class="text-center text-secondary" style="font-size: 0.8rem; font-family: 'Lato', sans-serif;">
-      Copyright © 2025. Designed and implemented by Yan Lin.
+      <span class="dark-mode-text"><i class="bi bi-moon-fill"></i> ずっと真夜中でいいのに。</span>
+      <span class="light-mode-text"><i class="bi bi-sun-fill"></i> ずっと正午でいいのに。</span>
+      <span class="mx-1">|</span>
+      Designed and implemented by Yan Lin.
       <span class="mx-1">|</span>
       <a class="link link-secondary" target="_blank" href="https://git.yanlincs.com/yanlin/Homepage">Source Code</a>
     </p>

dist/projects/index.html (vendored; 5 lines changed)

@@ -173,7 +173,10 @@
 <footer>
   <div class="container">
     <p class="text-center text-secondary" style="font-size: 0.8rem; font-family: 'Lato', sans-serif;">
-      Copyright © 2025. Designed and implemented by Yan Lin.
+      <span class="dark-mode-text"><i class="bi bi-moon-fill"></i> ずっと真夜中でいいのに。</span>
+      <span class="light-mode-text"><i class="bi bi-sun-fill"></i> ずっと正午でいいのに。</span>
+      <span class="mx-1">|</span>
+      Designed and implemented by Yan Lin.
       <span class="mx-1">|</span>
       <a class="link link-secondary" target="_blank" href="https://git.yanlincs.com/yanlin/Homepage">Source Code</a>
     </p>

dist/publications/index.html (vendored; 7 lines changed)

@@ -77,7 +77,7 @@
 </div>
 </div>
 <h5 class="mb-1 paper-title">UVTM: Universal Vehicle Trajectory Modeling with ST Feature Domain Generation</h5>
-<p class="card-text mb-auto author-name">Yan Lin, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan</p>
+<p class="card-text mb-auto author-name"><strong>Yan Lin</strong>, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan</p>
 </div>
 
 

@@ -459,7 +459,10 @@
 <footer>
   <div class="container">
     <p class="text-center text-secondary" style="font-size: 0.8rem; font-family: 'Lato', sans-serif;">
-      Copyright © 2025. Designed and implemented by Yan Lin.
+      <span class="dark-mode-text"><i class="bi bi-moon-fill"></i> ずっと真夜中でいいのに。</span>
+      <span class="light-mode-text"><i class="bi bi-sun-fill"></i> ずっと正午でいいのに。</span>
+      <span class="mx-1">|</span>
+      Designed and implemented by Yan Lin.
       <span class="mx-1">|</span>
       <a class="link link-secondary" target="_blank" href="https://git.yanlincs.com/yanlin/Homepage">Source Code</a>
     </p>