From 49a42a785fe42ab91edf01302a87e378b8126a75 Mon Sep 17 00:00:00 2001
From: Yan Lin
Date: Tue, 3 Feb 2026 19:25:01 +0100
Subject: [PATCH] minor revision of section title

---
 content/dl4traj/self-supervised/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/dl4traj/self-supervised/index.md b/content/dl4traj/self-supervised/index.md
index 1e3cb74..6be8002 100644
--- a/content/dl4traj/self-supervised/index.md
+++ b/content/dl4traj/self-supervised/index.md
@@ -283,7 +283,7 @@ Treating these as different views of the same trajectory encourages the model to
 
 To implement the framework, one or more encoders map the trajectory views into a shared embedding space where similarity can be computed. When views share the same representation format, a single encoder can process both views. When views use different representations, separate encoders are needed for each format, with their outputs projected to the same space. The encoder design choices are similar to those in auto-encoding: recurrent networks, Transformers, or temporal convolutional networks that produce fixed-size embeddings from variable-length sequences.
 
-### Applications: Representation Learning for Downstream Tasks
+### Applications: Trajectory Representation Learning
 
 The representations learned through contrastive learning serve similar purposes to those from auto-encoders: fixed-size embeddings for classification, similarity computation, and clustering. In practice, the difference in training objective can lead to different performance characteristics.
 
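
Reviewer note, outside the patch proper: the context lines above describe a dual-encoder contrastive setup (two views of one trajectory, projected into a shared embedding space). The sketch below illustrates that setup under stated assumptions; it is not code from the edited document. Encoder, projection, and loss names are all hypothetical, the "encoder" is a toy mean-pooling stand-in, and the objective is a generic InfoNCE-style loss.

```python
# Illustrative sketch only: toy encoders + InfoNCE-style contrastive loss,
# matching the structure described in the diff context (not the author's code).
import numpy as np

rng = np.random.default_rng(0)

def encode_view(points, proj):
    """Toy encoder: mean-pool variable-length (T, d) point features,
    project into the shared k-dim embedding space, L2-normalize."""
    pooled = points.mean(axis=0)              # (d,)
    z = pooled @ proj                         # (k,)
    return z / (np.linalg.norm(z) + 1e-8)

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE over a batch: matching rows of z_a and z_b are positives;
    every other row in the batch serves as a negative."""
    logits = (z_a @ z_b.T) / temperature      # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Batch of 4 variable-length trajectories, each yielding two noisy views.
d, k, B = 3, 8, 4
proj_a = rng.normal(size=(d, k))  # separate projections, as when the two
proj_b = rng.normal(size=(d, k))  # views use different representations
trajs = [rng.normal(size=(rng.integers(5, 12), d)) for _ in range(B)]
z_a = np.stack([encode_view(t + 0.01 * rng.normal(size=t.shape), proj_a)
                for t in trajs])
z_b = np.stack([encode_view(t + 0.01 * rng.normal(size=t.shape), proj_b)
                for t in trajs])
loss = info_nce(z_a, z_b)
print(float(loss))
```

With shared representation formats, `proj_b` would simply be replaced by `proj_a` (a single encoder processing both views), as the section notes.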