Small adjustments
parent 01a9b1ac4d
commit 53c60f9add
11 changed files with 77 additions and 26 deletions
dist/blog/index.html (vendored): 13 changes
@@ -43,12 +43,16 @@
 <div class="list-group list-group-flush">
   <div class="list-group-item px-0">
-    <a class="mb-1 paper-title blog-link" href="/blog/html/one-step-diffusion-models.html">One Step Diffusion Models</a> | <span class="paper-title text-muted">May 2025</span>
+    <a class="mb-1 paper-title blog-link text-decoration-none" href="/blog/html/one-step-diffusion-models.html">
+      One Step Diffusion Models <i class="bi bi-arrow-right-circle"></i>
+    </a> <span class="paper-title text-muted ms-2">May 2025</span>
     <p class="card-text mb-auto tldr">Despite the promising performance of diffusion models on continuous modality generation, one deficiency that is holding them back is their requirement for multi-step denoising processes, which can be computationally expensive. In this article, we examine recent works that aim to build diffusion models capable of performing sampling in one or a few steps.</p>
   </div>

   <div class="list-group-item px-0">
-    <a class="mb-1 paper-title blog-link" href="/blog/html/multi-modal-transformer.html">Multi-modal and Multi-function Transformers</a> | <span class="paper-title text-muted">April 2025</span>
+    <a class="mb-1 paper-title blog-link text-decoration-none" href="/blog/html/multi-modal-transformer.html">
+      Multi-modal and Multi-function Transformers <i class="bi bi-arrow-right-circle"></i>
+    </a> <span class="paper-title text-muted ms-2">April 2025</span>
     <p class="card-text mb-auto tldr">Multi-modal and multi-function Transformers enable a single architecture to process diverse data types such as language, images, and videos simultaneously. These models employ techniques like vector quantization and lookup-free quantization to map different modalities into a unified embedding space, allowing the Transformer to handle them within the same sequence. Beyond processing multiple data types, these architectures can also combine different functionalities, such as auto-regressive language generation and diffusion-based image creation, within a single model.</p>
   </div>
@@ -60,7 +64,10 @@
 <footer>
   <div class="container">
     <p class="text-center text-secondary" style="font-size: 0.8rem; font-family: 'Lato', sans-serif;">
-      Copyright © 2025. Designed and implemented by Yan Lin.
+      <span class="dark-mode-text"><i class="bi bi-moon-fill"></i> ずっと真夜中でいいのに。</span>
+      <span class="light-mode-text"><i class="bi bi-sun-fill"></i> ずっと正午でいいのに。</span>
+      <span class="mx-1">|</span>
+      Designed and implemented by Yan Lin.
       <span class="mx-1">|</span>
       <a class="link link-secondary" target="_blank" href="https://git.yanlincs.com/yanlin/Homepage">Source Code</a>
     </p>
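
The dark-mode-text / light-mode-text spans added to the footer only work if the site's stylesheet shows one and hides the other depending on the active color theme; that stylesheet is not part of this commit. A minimal sketch of what such rules could look like, assuming the theme is switched by toggling a dark-mode class on the body element (the two span classes come from the diff above; the body.dark-mode hook is an assumption, not something this commit defines):

/* Show only the span that matches the active theme.
   .dark-mode-text and .light-mode-text are the classes added in this diff;
   the body.dark-mode toggle is an assumed convention, not part of this commit. */
.light-mode-text { display: inline; }
.dark-mode-text { display: none; }
body.dark-mode .light-mode-text { display: none; }
body.dark-mode .dark-mode-text { display: inline; }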