Summary of paper A Metric Learning Reality Check

This is a summary of the paper A Metric Learning Reality Check for the course Seminar of Computer Vision by Deep Learning at TU Delft.


A Metric Learning Reality Check
Deep metric learning papers from the past four years have consistently claimed great advances in accuracy, often more than doubling the performance of decade-old methods. This paper demonstrates the flaws in those claims.


Why metric learning is important
Metric learning attempts to map data to an embedding space in which similar data are close together and dissimilar data are far apart.
This can be achieved with embedding losses or classification losses.
Embedding losses operate on the relationships between samples in a batch, ensuring that similar samples are close together in the embedding space.
Classification losses involve a weight matrix that converts embeddings into class logits (scores), which are used to predict the class of each sample.
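A minimal sketch of this mapping, assuming PyTorch (the toy encoder, shapes and names are illustrative, not taken from the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy encoder that maps flattened inputs to a 64-dimensional embedding space.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64))

x = torch.randn(8, 1, 28, 28)                # a batch of 8 inputs
embeddings = F.normalize(encoder(x), dim=1)  # 8 points on the unit hypersphere

# Pairwise Euclidean distances; both loss families build on quantities like these.
distances = torch.cdist(embeddings, embeddings)
print(distances.shape)  # torch.Size([8, 8])
```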

Use of embeddings during test time:
During testing, embeddings are preferred over logits or softmax values, especially in tasks like information retrieval (e.g. image search), where the goal is to find the data most similar to a query. This is because embeddings capture the similarity between data points directly.
Open-Set Classification:
In scenarios where the test-set classes differ from the training-set classes, embeddings are useful for nearest-neighbors voting or distance thresholding, e.g. face verification and person re-identification.
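A minimal retrieval sketch for this use of embeddings, assuming scikit-learn (the random arrays below stand in for real gallery and query embeddings):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

gallery = np.random.randn(1000, 64).astype(np.float32)  # embeddings of the database images
query = np.random.randn(1, 64).astype(np.float32)       # embedding of the query image

# Retrieval compares embeddings directly; no logits or softmax values are involved.
index = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(gallery)
distances, indices = index.kneighbors(query)  # the 5 items most similar to the query
```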

Cases Where Classification Loss is Not Applicable
Some datasets lack explicit labels; instead, relative similarities between samples are used. This is where embedding losses come in, since there are no explicit labels with which to apply a classification loss.


Embedding Losses




A classic pair-based method is the contrastive loss, which attempts to make the distance between positive pairs fall below some threshold and the distance between negative pairs rise above some threshold.
The theoretical downside is that the same distance thresholds are applied to all pairs, even though there may be a large variance in their similarities and dissimilarities.
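For reference, one common hinge-style formulation of the contrastive loss (notation is mine, not quoted from the paper), with positive-pair distance $d_p$, negative-pair distance $d_n$, and $[\,\cdot\,]_+ = \max(0, \cdot)$:

$$
L_{\text{contrastive}} = \big[\, d_p - m_{\text{pos}} \,\big]_+ + \big[\, m_{\text{neg}} - d_n \,\big]_+
$$

where $m_{\text{pos}}$ and $m_{\text{neg}}$ are the fixed positive and negative distance thresholds.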
The triplet margin loss addresses this issue. It uses an anchor, a positive and a negative sample, where the anchor is more similar to the positive than to the negative. The triplet margin loss attempts to make the anchor-positive distance smaller than the anchor-negative distance by some margin, which allows it to account for the variance between pairs.
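A common formulation of the triplet margin loss (again, notation mine), with anchor-positive distance $d_{ap}$, anchor-negative distance $d_{an}$ and margin $m$:

$$
L_{\text{triplet}} = \big[\, d_{ap} - d_{an} + m \,\big]_+
$$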


Classification Losses
Classification losses are based on the inclusion of a weight matrix, where each column corresponds to a particular class. Training consists of multiplying the weight matrix with the embedding vectors to obtain logits, and then applying a loss function to the logits.
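A minimal sketch of this computation, assuming PyTorch and plain cross-entropy (the shapes are illustrative; specific classification losses modify the logits in different ways before the loss is applied):

```python
import torch
import torch.nn.functional as F

batch_size, embedding_dim, num_classes = 32, 128, 100
embeddings = torch.randn(batch_size, embedding_dim)
labels = torch.randint(0, num_classes, (batch_size,))

# Weight matrix: one column per class.
W = torch.randn(embedding_dim, num_classes, requires_grad=True)

logits = embeddings @ W                 # (batch_size, num_classes) class scores
loss = F.cross_entropy(logits, labels)  # classification loss applied to the logits
loss.backward()
```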

Pair and Triplet Mining
Mining is the process of finding the best pairs or triplets to train on. There are two broad approaches to mining: offline and online. Offline mining is performed before batch construction, so that each batch is made to contain the most informative samples. This might be accomplished by storing lists of hard negatives or by doing a nearest-neighbors search before each epoch.
In contrast, online mining finds hard pairs or triplets within each randomly selected batch (a sketch of this appears after the list below). Using all possible pairs or triplets is an alternative, but it has two weaknesses:

1. Practically, it can consume a lot of memory.

2. Theoretically, it has the tendency to include a large number of easy negatives and positives, causing performance to plateau quickly.
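A minimal sketch of online mining within a batch, assuming PyTorch; the hardest-negative-per-anchor rule used here is just one illustrative strategy, not the specific miners evaluated in the paper:

```python
import torch
import torch.nn.functional as F

def hardest_negative_triplets(embeddings: torch.Tensor, labels: torch.Tensor):
    """For each anchor, pair every positive with the anchor's closest (hardest) negative."""
    dist = torch.cdist(embeddings, embeddings)          # pairwise distances within the batch
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # mask of same-label pairs
    triplets = []
    for a in range(len(labels)):
        positives = torch.where(same[a] & (torch.arange(len(labels)) != a))[0]
        negatives = torch.where(~same[a])[0]
        if len(positives) == 0 or len(negatives) == 0:
            continue
        n = negatives[dist[a, negatives].argmin()]      # hardest (closest) negative
        for p in positives:
            triplets.append((a, int(p), int(n)))
    return triplets

embeddings = F.normalize(torch.randn(16, 64), dim=1)  # embeddings of a random batch
labels = torch.randint(0, 4, (16,))                   # 4 classes in the batch
print(len(hardest_negative_triplets(embeddings, labels)))
```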


Advanced Training Methods
To obtain higher accuracy, many recent papers have gone beyond loss functions and mining techniques. For example, several recent methods incorporate generator networks in their training procedure.



