<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Yan Lin's Homepage</title>
<link rel="icon" href="/logo.webp" type="image/webp">
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet">
<link href="https://cdn.jsdelivr.net/npm/bootstrap-icons@1.7.2/font/bootstrap-icons.css" rel="stylesheet">
<link rel="stylesheet" href="/index.css">
</head>
<body>
<main class="container">

<header class="border-bottom lh-1 pt-3 pb-0 border-secondary">
<div class="row flex-nowrap justify-content-between align-items-center">
<div class="col-2">
<a class="link-secondary header-icon px-2 h4" href="mailto:s@yanlincs.com"><i class="bi bi-envelope-fill"></i></a>
</div>
<div class="col-8 text-center">
<div class="page-header-logo h2 m-0 fw-bold" style="font-family: 'Abril Fatface', serif;">Yan Lin's Homepage</div>
</div>
<div class="col-2 text-end">
<a class="link-secondary header-icon px-2 h4" href="https://lab.yanlincs.com"><i class="bi bi-stack"></i></a>
</div>
</div>

<nav class="navbar navbar-expand">
<ul class="navbar-nav d-flex justify-content-evenly mx-auto gap-3 gap-md-5">
<li class="nav-item">
<a class="link nav-link px-0" href="/#publications">Publications</a>
</li>
<li class="nav-item">
<a class="link nav-link px-0" href="/#projects">Projects</a>
</li>
<li class="nav-item">
<a class="link nav-link px-0" href="/#presentations">Presentations</a>
</li>
<li class="nav-item">
<a class="link nav-link px-0" href="/#services">Services</a>
</li>
</ul>
</nav>
</header>
<div class="row g-0 border rounded text-body-emphasis bg-body-secondary flex-md-row my-4 position-relative shadow-sm transition-shadow" style="transition: box-shadow 0.2s ease-in-out;" onmouseover="this.classList.remove('shadow-sm'); this.classList.add('shadow')" onmouseout="this.classList.remove('shadow'); this.classList.add('shadow-sm')">
<div class="col p-4 d-flex flex-column position-static">
<h2 class="fst-italic mb-3">Biography - Yan Lin</h2>
<p class="card-text mb-auto" style="font-size: 1.1rem;">
I am currently a postdoctoral researcher in the Department of Computer Science at Aalborg University.
I received my PhD and Bachelor's degrees from Beijing Jiaotong University, China.
My research interests include <i>spatiotemporal data mining</i>, <i>representation learning</i>, and <i>AI for science</i>.
</p>
</div>
<div class="col-5 col-xl-4 col-xxl-3 d-none d-lg-flex align-items-center">
<img src="/profile.webp" alt="Yan Lin" class="rounded w-100" style="object-fit: contain;">
</div>
</div>
<article class="section" id="publications">
<div class="d-flex justify-content-between align-items-center mb-1">
<h2 class="section-title d-inline-block mb-0"><i class="bi bi-book"></i> Publications</h2>
<a class="mb-0 link link-secondary link-underline-opacity-0 h5" href="/publications/">View All <i class="bi bi-arrow-right-circle"></i></a>
</div>
<div>
<div id="primary-publications" class="list-group list-group-flush">

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
IEEE TKDE<span class='text-muted'> | </span>2025
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://arxiv.org/abs/2402.07232" target="_blank" rel="noopener noreferrer">Preprint</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/Logan-Lin/UVTM" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">UVTM: Universal Vehicle Trajectory Modeling with ST Feature Domain Generation</h5>
<p class="card-text mb-auto author-name"><strong>Yan Lin</strong>, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin, Huaiyu Wan</p>
</div>
<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
IJCAI<span class='text-muted'> | </span>2025
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://arxiv.org/abs/2405.12459" target="_blank" rel="noopener noreferrer">Preprint</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/Zeru19/PLM4Traj" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">TrajCogn: Leveraging LLMs for Cognizing Movement Patterns and Travel Purposes from Trajectories</h5>
<p class="card-text mb-auto author-name">Zeyu Zhou*, <strong>Yan Lin*</strong>, Haomin Wen, Shengnan Guo, Jilin Hu, Youfang Lin, Huaiyu Wan</p>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
IEEE TKDE<span class='text-muted'> | </span>2025
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://ieeexplore.ieee.org/document/10818577" target="_blank" rel="noopener noreferrer">Paper</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://arxiv.org/abs/2407.12550" target="_blank" rel="noopener noreferrer">Preprint</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/Logan-Lin/UniTE" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">UniTE: A Survey and Unified Pipeline for Pre-training Spatiotemporal Trajectory Embeddings</h5>
<p class="card-text mb-auto author-name"><strong>Yan Lin</strong>, Zeyu Zhou, Yicheng Liu, Haochen Lv, Haomin Wen, Tianyi Li, Yushuai Li, Christian S. Jensen, Shengnan Guo, Youfang Lin, Huaiyu Wan</p>
</div>
<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
WWW<span class='text-muted'> | </span>2025
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://openreview.net/forum?id=KmMSQS6tFn" target="_blank" rel="noopener noreferrer">Paper</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/decisionintelligence/Path-LLM" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">Path-LLM: A Multi-Modal Path Representation Learning by Aligning and Fusing with Large Language Models</h5>
<p class="card-text mb-auto author-name">Yongfu Wei*, <strong>Yan Lin*</strong>, Hongfan Gao, Ronghui Xu, Sean Bin Yang, Jilin Hu</p>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
AAAI<span class='text-muted'> | </span>2025
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://arxiv.org/abs/2408.12809" target="_blank" rel="noopener noreferrer">Preprint</a>
</div>
</div>
<h5 class="mb-1 paper-title">DutyTTE: Deciphering Uncertainty in Origin-Destination Travel Time Estimation</h5>
<p class="card-text mb-auto author-name">Xiaowei Mao*, <strong>Yan Lin*</strong>, Shengnan Guo, Yubin Chen, Xingyu Xian, Haomin Wen, Qisen Xu, Youfang Lin, Huaiyu Wan</p>
</div>
<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
NeurIPS<span class='text-muted'> | </span>2024
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://openreview.net/forum?id=0feJEykDRx" target="_blank" rel="noopener noreferrer">Paper</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://neurips.cc/virtual/2024/poster/96914" target="_blank" rel="noopener noreferrer">Poster</a>
</div>
</div>
<h5 class="mb-1 paper-title">Mobility-LLM: Learning Visiting Intentions and Travel Preference from Human Mobility Data with Large Language Models</h5>
<p class="card-text mb-auto author-name">Letian Gong*, <strong>Yan Lin*</strong>, Xinyue Zhang, Yiwen Lu, Xuedi Han, Yichen Liu, Shengnan Guo, Youfang Lin, Huaiyu Wan</p>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
SIGMOD<span class='text-muted'> | </span>2024
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://dl.acm.org/doi/10.1145/3617337" target="_blank" rel="noopener noreferrer">Paper</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://arxiv.org/abs/2307.03048" target="_blank" rel="noopener noreferrer">Preprint</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/Logan-Lin/DOT" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">Origin-Destination Travel Time Oracle for Map-based Services</h5>
<p class="card-text mb-auto author-name"><strong>Yan Lin</strong>, Huaiyu Wan, Jilin Hu, Shengnan Guo, Bin Yang, Christian S. Jensen, Youfang Lin</p>
</div>
<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
IEEE TKDE<span class='text-muted'> | </span>2023
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://ieeexplore.ieee.org/abstract/document/10375102" target="_blank" rel="noopener noreferrer">Paper</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://arxiv.org/abs/2207.14539" target="_blank" rel="noopener noreferrer">Preprint</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/Logan-Lin/MMTEC" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">Pre-training General Trajectory Embeddings with Maximum Multi-view Entropy Coding</h5>
<p class="card-text mb-auto author-name"><strong>Yan Lin</strong>, Huaiyu Wan, Shengnan Guo, Jilin Hu, Christian S. Jensen, Youfang Lin</p>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
IEEE TKDE<span class='text-muted'> | </span>2022
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://ieeexplore.ieee.org/abstract/document/9351627" target="_blank" rel="noopener noreferrer">Paper</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/Logan-Lin/TALE" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">Pre-training Time-Aware Location Embeddings from Spatial-Temporal Trajectories</h5>
<p class="card-text mb-auto author-name">Huaiyu Wan, <strong>Yan Lin</strong>, Shengnan Guo, Youfang Lin</p>
</div>
<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
AAAI<span class='text-muted'> | </span>2021
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://ojs.aaai.org/index.php/AAAI/article/view/16548" target="_blank" rel="noopener noreferrer">Paper</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/Logan-Lin/CTLE" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">Pre-training Context and Time Aware Location Embeddings from Spatial-Temporal Trajectories for User Next Location Prediction</h5>
<p class="card-text mb-auto author-name"><strong>Yan Lin</strong>, Huaiyu Wan, Shengnan Guo, Youfang Lin</p>
</div>

</div>
<hr class="my-2">
<div id="secondary-publications" class="list-group list-group-flush">
<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name secondary-text">
KDD<span class='text-muted'> | </span>2025
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://arxiv.org/abs/2412.10859" target="_blank" rel="noopener noreferrer">Preprint</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/decisionintelligence/DUET" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">DUET: Dual Clustering Enhanced Multivariate Time Series Forecasting</h5>
<p class="card-text mb-auto author-name">Xiangfei Qiu, Xingjian Wu, <strong>Yan Lin</strong>, Chenjuan Guo, Jilin Hu, Bin Yang</p>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name secondary-text">
IEEE TKDE<span class='text-muted'> | </span>2024
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://www.computer.org/csdl/journal/tk/5555/01/10679607/20b3hlbjBOo" target="_blank" rel="noopener noreferrer">Paper</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://arxiv.org/abs/2402.07369" target="_blank" rel="noopener noreferrer">Preprint</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/wtl52656/Diff-RNTraj" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">Diff-RNTraj: A Structure-aware Diffusion Model for Road Network-constrained Trajectory Generation</h5>
<p class="card-text mb-auto author-name">Tonglong Wei, Youfang Lin, Shengnan Guo, <strong>Yan Lin</strong>, Yiheng Huang, Chenyang Xiang, Yuqing Bai, Menglu Ya, Huaiyu Wan</p>
</div>
<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name secondary-text">
IEEE TKDE<span class='text-muted'> | </span>2024
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://ieeexplore.ieee.org/document/10836764" target="_blank" rel="noopener noreferrer">Paper</a>
</div>
</div>
<h5 class="mb-1 paper-title">STCDM: Spatio-Temporal Contrastive Diffusion Model for Check-In Sequence Generation</h5>
<p class="card-text mb-auto author-name">Letian Gong, Shengnan Guo, <strong>Yan Lin</strong>, Yichen Liu, Erwen Zheng, Yiwei Shuang, Youfang Lin, Jilin Hu, Huaiyu Wan</p>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name secondary-text">
IEEE TKDE<span class='text-muted'> | </span>2024
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://www.computer.org/csdl/journal/tk/5555/01/10517676/1WCj0j0FljW" target="_blank" rel="noopener noreferrer">Paper</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://arxiv.org/abs/2404.19141" target="_blank" rel="noopener noreferrer">Preprint</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/wtl52656/MM-STGED" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">Micro-Macro Spatial-Temporal Graph-based Encoder-Decoder for Map-Constrained Trajectory Recovery</h5>
<p class="card-text mb-auto author-name">Tonglong Wei, Youfang Lin, <strong>Yan Lin</strong>, Shengnan Guo, Lan Zhang, Huaiyu Wan</p>
</div>
<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name secondary-text">
KBS<span class='text-muted'> | </span>2024
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://www.sciencedirect.com/science/article/pii/S0950705123010730" target="_blank" rel="noopener noreferrer">Paper</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/wtl52656/IAGCN" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">Inductive and Adaptive Graph Convolution Networks Equipped with Constraint Task for Spatial-Temporal Traffic Data Kriging</h5>
<p class="card-text mb-auto author-name">Tonglong Wei, Youfang Lin, Shengnan Guo, <strong>Yan Lin</strong>, Yiji Zhao, Xiyuan Jin, Zhihao Wu, Huaiyu Wan</p>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name secondary-text">
IEEE TKDE<span class='text-muted'> | </span>2024
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://arxiv.org/abs/2407.15899" target="_blank" rel="noopener noreferrer">Preprint</a>
</div>
</div>
<h5 class="mb-1 paper-title">Spatial-Temporal Cross-View Contrastive Pre-Training for Check-in Sequence Representation Learning</h5>
<p class="card-text mb-auto author-name">Letian Gong, Huaiyu Wan, Shengnan Guo, Xiucheng Li, <strong>Yan Lin</strong>, Erwen Zheng, Tianyi Wang, Zeyu Zhou, Youfang Lin</p>
</div>
<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name secondary-text">
AAAI<span class='text-muted'> | </span>2023
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://ojs.aaai.org/index.php/AAAI/article/view/25546" target="_blank" rel="noopener noreferrer">Paper</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/LetianGong/CACSR" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">Contrastive Pre-training with Adversarial Perturbations for Check-In Sequence Representation Learning</h5>
<p class="card-text mb-auto author-name">Letian Gong, Youfang Lin, Shengnan Guo, <strong>Yan Lin</strong>, Tianyi Wang, Erwen Zheng, Zeyu Zhou, Huaiyu Wan</p>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name secondary-text">
ESWA<span class='text-muted'> | </span>2023
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://www.sciencedirect.com/science/article/pii/S0957417423012241" target="_blank" rel="noopener noreferrer">Paper</a>
</div>
</div>
<h5 class="mb-1 paper-title">Adversarial Self-Attentive Time-Variant Neural Networks for Multi-Step Time Series Forecasting</h5>
<p class="card-text mb-auto author-name">Changxia Gao, Ning Zhang, Youru Li, <strong>Yan Lin</strong>, Huaiyu Wan</p>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name secondary-text">
APIN<span class='text-muted'> | </span>2023
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://link.springer.com/article/10.1007/s10489-023-05057-7" target="_blank" rel="noopener noreferrer">Paper</a>
</div>
</div>
<h5 class="mb-1 paper-title">Multi-scale Adaptive Attention-based Time-Variant Neural Networks for Multi-step Time Series Forecasting</h5>
<p class="card-text mb-auto author-name">Changxia Gao, Ning Zhang, Youru Li, <strong>Yan Lin</strong>, Huaiyu Wan</p>
</div>
<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name secondary-text">
NeurIPS<span class='text-muted'> | </span>2023
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://openreview.net/forum?id=y08bkEtNBK" target="_blank" rel="noopener noreferrer">Paper</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/Water2sea/WITRAN" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">WITRAN: Water-wave Information Transmission and Recurrent Acceleration Network for Long-range Time Series Forecasting</h5>
<p class="card-text mb-auto author-name">Yuxin Jia, Youfang Lin, Xinyan Hao, <strong>Yan Lin</strong>, Shengnan Guo, Huaiyu Wan</p>
</div>

</div>
</div>
<div class="text-start mt-1">
<small class="text-muted" style="font-size: 0.8rem;">* Equal Contribution</small>
</div>
</article>
<article class="section" id="projects">
<div class="d-flex justify-content-between align-items-center mb-1">
<h2 class="section-title d-inline-block mb-0"><i class="bi bi-code-slash"></i> Projects</h2>
<a class="mb-0 link link-secondary link-underline-opacity-0 h5" href="/projects/">View All <i class="bi bi-arrow-right-circle"></i></a>
</div>
<div>
<div id="primary-projects" class="list-group list-group-flush">

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
Fundamental Research Funds for the Central Universities of China
</p>
<div class="d-flex gap-2">
</div>
</div>
<h5 class="mb-1 paper-title">Research on <i>Prediction of User Travel Destination and Travel Time Based on Trajectory Representation Learning</i></h5>
<p class="card-text mb-auto project-desc">Applies representation learning to trajectory data to transform raw features into high-level information, improving the performance of downstream tasks such as travel time and destination prediction.</p>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
Personal Interest Project
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://www.overleafcopilot.com/" target="_blank" rel="noopener noreferrer">Home</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://chromewebstore.google.com/detail/overleaf-copilot/eoadabdpninlhkkbhngoddfjianhlghb" target="_blank" rel="noopener noreferrer">Install</a>
</div>
</div>
<h5 class="mb-1 paper-title">Development of <i>OverleafCopilot - Empowering Academic Writing in Overleaf with Large Language Models</i></h5>
<p class="card-text mb-auto project-desc">This project develops a browser extension that seamlessly integrates Large Language Models (such as ChatGPT) into Overleaf, the popular online academic writing platform.</p>
</div>
<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
Personal Interest Project
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://www.promptgenius.site/" target="_blank" rel="noopener noreferrer">Website</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://github.com/wenhaomin/ChatGPT-PromptGenius" target="_blank" rel="noopener noreferrer">Code</a>
</div>
</div>
<h5 class="mb-1 paper-title">Development of <i>PromptGenius - All-purpose prompts for LLMs</i></h5>
<p class="card-text mb-auto project-desc">This project focuses on developing a website that offers a wide range of prompt categories, enhancing the versatility of LLMs for various tasks and improving their output quality.</p>
</div>

</div>
<hr class="my-2">
<div id="secondary-projects" class="list-group list-group-flush">
<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name secondary-text">
Villum Foundation
</p>
<div class="d-flex gap-2">
</div>
</div>
<h5 class="mb-1 paper-title">Research on <i>Inverse Design of Materials Using Diffusion Probabilistic Models</i></h5>
<p class="card-text mb-auto project-desc">This project focuses on developing diffusion probabilistic models to first understand the relationship between chemistry/structure and material properties, then enable the inverse design of new materials with specific properties. This project currently supports my postdoctoral research position.</p>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name secondary-text">
National Natural Science Foundation of China
</p>
<div class="d-flex gap-2">
</div>
</div>
<h5 class="mb-1 paper-title">Research on <i>Pre-training Representation Learning Methods of Spatial-temporal Trajectory Data for Traffic Prediction</i></h5>
<p class="card-text mb-auto project-desc">This project aims to propose pre-training representation learning methods for spatial-temporal trajectory data, modeling multiple features to improve traffic prediction tasks. It demonstrates how trajectory representation learning can enhance traffic data mining.</p>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name secondary-text">
National Natural Science Foundation of China
</p>
<div class="d-flex gap-2">
</div>
</div>
<h5 class="mb-1 paper-title">Research on <i>Spatial-temporal Trajectory Generation and Representation Learning Methods for Sparsity Problems</i></h5>
<p class="card-text mb-auto project-desc">This project explores how to generate high-quality spatial-temporal trajectory data and corresponding representations to address sparsity-related issues, thereby supporting a variety of downstream tasks.</p>
</div>

</div>
</div>
</article>
<article class="section" id="presentations">
<div class="d-flex justify-content-between align-items-center mb-1">
<h2 class="section-title d-inline-block mb-0"><i class="bi bi-easel"></i> Presentations</h2>
<a class="mb-0 link link-secondary link-underline-opacity-0 h5" href="/presentations/">View All <i class="bi bi-arrow-right-circle"></i></a>
</div>
<div class="list-group list-group-flush" id="presentation-list">

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
Guest lecture<span class='text-muted'> | </span>Aalborg University
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="/assets/Self-supervised Learning of Trajectory Data.pdf" target="_blank" rel="noopener noreferrer">Slides</a>
</div>
</div>
<h5 class="mb-1 paper-title">Self-supervised Learning of Trajectory Data</h5>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
Workshop presentation<span class='text-muted'> | </span>KDD 2024
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="/assets/KDD_2024_Workshop_PLM4Traj.pdf" target="_blank" rel="noopener noreferrer">Slides</a>
<a class="link icon-link icon-link-hover paper-link link-secondary" href="https://arxiv.org/abs/2405.12459" target="_blank" rel="noopener noreferrer">Paper</a>
</div>
</div>
<h5 class="mb-1 paper-title">PLM4Traj: Leveraging Pre-trained Language Models for Cognizing Movement Patterns and Travel Purposes from Trajectories</h5>
</div>
<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
Paper Oral<span class='text-muted'> | </span>SIGMOD 2024
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="/assets/SIGMOD-Oral-PPT.pdf" target="_blank" rel="noopener noreferrer">Slides</a>
</div>
</div>
<h5 class="mb-1 paper-title">Origin-Destination Travel Time Oracle for Map-based Services</h5>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
Tutorial<span class='text-muted'> | </span>SpatialDI 2024
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="/assets/Talk on SpatialDI 2024.pdf" target="_blank" rel="noopener noreferrer">Slides</a>
</div>
</div>
<h5 class="mb-1 paper-title">Self-supervised Learning of Spatial-temporal Trajectories</h5>
</div>

<div class="list-group-item px-0">
<div class="d-flex justify-content-between align-items-center mb-1">
<p class="d-inline-block mb-0 venue-name primary-text">
Paper Oral<span class='text-muted'> | </span>AAAI 2021
</p>
<div class="d-flex gap-2">
<a class="link icon-link icon-link-hover paper-link link-secondary" href="/assets/AAAI21 Oral PPT.pdf" target="_blank" rel="noopener noreferrer">Slides</a>
</div>
</div>
<h5 class="mb-1 paper-title">Pre-training Context and Time Aware Location Embeddings from Spatial-Temporal Trajectories for User Next Location Prediction</h5>
</div>

</div>
</article>
<article id="services" class="rounded text-body-emphasis bg-body-secondary flex-md-row my-4 position-relative p-4 transition-shadow" style="transition: box-shadow 0.2s ease-in-out;" onmouseover="this.classList.add('shadow-sm')" onmouseout="this.classList.remove('shadow-sm')">
<h2 class="mb-3"><i class="bi bi-person-lines-fill"></i> Services</h2>
<div id="service-list">
<ul class="list ps-3">
<li>IEEE and ACM member</li>
<li>Secretary of the IEEE Computer Society, Denmark Section</li>
<li>Reviewer for journals including TIST, TII, and TVT</li>
<li>Program committee member for ICLR, KDD, AAAI, CVPR, ICCV, IJCAI, and WWW</li>
</ul>
</div>
</article>
<article class="section" id="blog">
<div class="d-flex justify-content-between align-items-center mb-1">
<h2 class="section-title d-inline-block mb-0"><i class="bi bi-newspaper"></i> Blog</h2>
<a class="mb-0 link link-secondary link-underline-opacity-0 h5" href="/blog/">View All <i class="bi bi-arrow-right-circle"></i></a>
</div>
<div class="list-group list-group-flush" id="blog-list">

<div class="list-group-item px-0">
<a class="mb-1 paper-title blog-link" href="/blog/html/one-step-diffusion-models.html">One Step Diffusion Models</a> | <span class="paper-title text-muted">May 2025</span>
<p class="card-text mb-auto tldr">Despite the promising performance of diffusion models on continuous modality generation, one deficiency holding them back is their reliance on multi-step denoising, which can be computationally expensive. In this article, we examine recent works that aim to build diffusion models capable of sampling in one or a few steps.</p>
</div>

<div class="list-group-item px-0">
<a class="mb-1 paper-title blog-link" href="/blog/html/multi-modal-transformer.html">Multi-modal and Multi-function Transformers</a> | <span class="paper-title text-muted">April 2025</span>
<p class="card-text mb-auto tldr">Multi-modal and multi-function Transformers enable a single architecture to process diverse data types, such as language, images, and videos, simultaneously. These models employ techniques like vector quantization and lookup-free quantization to map different modalities into a unified embedding space, allowing the Transformer to handle them within the same sequence. Beyond processing multiple data types, these architectures can also combine different functionalities, such as auto-regressive language generation and diffusion-based image creation, within a single model.</p>
</div>

</div>
</article>

</main>
<footer>
<div class="container">
<p class="text-center text-secondary" style="font-size: 0.8rem; font-family: 'Lato', sans-serif;">
Copyright © 2025. Designed and implemented by Yan Lin.
</p>
</div>
</footer>

<button id="back-to-top" class="btn btn-light rounded-circle" style="position: fixed; bottom: 20px; right: 20px; display: none; z-index: 1000; width: 40px; height: 40px; padding: 0;"><i class="bi bi-chevron-up"></i></button>
<script>
// Show or hide the back-to-top button based on scroll position
window.addEventListener('scroll', function() {
var backToTopButton = document.getElementById('back-to-top');
if (window.scrollY > 100) {
backToTopButton.style.display = 'block';
} else {
backToTopButton.style.display = 'none';
}
});

// Smoothly scroll to top when the button is clicked.
// Note: do not also set window.location.href = '#', as the resulting
// navigation jumps to the top instantly and cancels the smooth scroll.
document.getElementById('back-to-top').addEventListener('click', function(e) {
e.preventDefault();
window.scrollTo({
top: 0,
behavior: 'smooth'
});
});
</script>

</body>
</html>