
Talk:Self-supervised learning


Types

Can someone explain how positive and negative examples are identified if the training data are unlabelled? — Preceding unsigned comment added by Colin Rowat (talk | contribs) 01:30, 26 December 2022 (UTC)

This whole article seems pretty poorly written. The 'Types' section seems to compare non-contrastive with contrastive learning, but these are not the only two approaches to self-supervised learning; there are many others.
Anyway, to answer your question: in practice, at least in computer vision, positive pairs are usually generated through data augmentation. So we might have a picture of a bird and a blurred version of that picture; these two images become a positive pair, so their representations should be close. A negative sample can be introduced by picking any other sample from the dataset, which hopefully isn't a bird but still might be. A contrastive loss then says that the two augmented versions of the same image should be close to each other in latent space, while the negative sample should be far from both. A minimal sketch of such a loss appears below. Liam.schoneveld (talk) 03:16, 13 August 2023 (UTC)
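
For anyone who finds a concrete example easier to follow, here is a minimal sketch of that idea as an InfoNCE-style contrastive loss in PyTorch. The function name, tensor shapes, and temperature value are illustrative assumptions, not anything taken from the article; real implementations such as SimCLR compute this over whole batches rather than one anchor at a time.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(z_anchor, z_positive, z_negatives, temperature=0.1):
        # z_anchor:    (d,)   embedding of the original image
        # z_positive:  (d,)   embedding of its augmented (e.g. blurred) copy
        # z_negatives: (n, d) embeddings of other images from the dataset
        # Normalize so the dot products below are cosine similarities.
        z_anchor = F.normalize(z_anchor, dim=0)
        z_positive = F.normalize(z_positive, dim=0)
        z_negatives = F.normalize(z_negatives, dim=1)
        # Similarity of the anchor to the positive and to each negative.
        pos_sim = (z_anchor @ z_positive).unsqueeze(0) / temperature  # (1,)
        neg_sim = (z_negatives @ z_anchor) / temperature              # (n,)
        # Cross-entropy with the positive as the "correct class":
        # minimizing this pulls the positive pair together in latent
        # space and pushes the negatives away from the anchor.
        logits = torch.cat([pos_sim, neg_sim])                        # (1+n,)
        return -F.log_softmax(logits, dim=0)[0]

    # Example with random embeddings: one anchor, its augmented positive,
    # and eight negatives, all 128-dimensional (values are placeholders).
    loss = info_nce_loss(torch.randn(128), torch.randn(128),
                         torch.randn(8, 128))

The loss is lowest when the anchor is much more similar to its positive than to any negative, which is exactly the "pull positives together, push negatives apart" behaviour described above.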

Hilo

Hilo Hilo 209.35.174.215 (talk) 01:16, 22 December 2022 (UTC)

Reliability of using Medium as a source

The article lists a Medium article [1] as a source. Medium is a blogging platform with no peer review, so rigor cannot be assumed by default. To be clear, I don't doubt or dispute the rigor of https://medium.com/@whats-ai, the author, who I understand is an expert on the subject; I only think that, as a policy, Wikipedia should require more rigorous sourcing on highly technical articles.

Thoughts? Hibiscus192255 (talk) 10:22, 2 May 2024 (UTC)