Concept Graph & Summary using Claude 3 Opus | ChatGPT-4o | Llama 3:
Summary:
1.- Ranking problem: Given images and a query, rank images by relevance to the query
2.- Standard ranking pipeline: Collect data, learn ranking model, assign scores, sort samples
3.- Evaluating ranking models: Use rank-based measures such as Average Precision (AP) or Normalized Discounted Cumulative Gain (NDCG); see the AP/NDCG sketch after this list
4.- Learning model parameters: Popular to use simple surrogate losses such as the zero-one loss (optimized via a differentiable upper bound), since rank-based losses are hard to optimize directly
5.- Optimizing rank-based loss functions directly can give better performance but is computationally expensive
6.- Expensive gradient computation procedure: Assign scores, determine candidate rankings, solve optimization problem
7.- Most violating ranking: Optimal solution to the optimization problem, needed for efficient gradient computation
8.- QS-suitable loss functions: Rank-based loss functions amenable to efficient optimization
9.- Interleaving rank: Number of positive samples preceding a negative sample; see the interleaving sketch after this list
10.- AP loss and NDCG loss are QS-suitable
11.- Negative decomposability property: The loss decomposes additively over the negative samples
12.- Interleaving dependence property: Loss depends only on interleaving pattern of negatives and positives
13.- Multiple most violating rankings can exist
14.- Partial ordering structure: Scores constrain the optimal interleaving ranks (a higher-scored negative never needs a larger interleaving rank than a lower-scored one)
15.- Gradient computation steps: Induce partial ordering, find optimal interleaving pattern
16.- Baseline algorithm: Completely sort positives (O(p log p)) and negatives (O(n log n)), then find the optimal interleaving (O(np)); see the baseline sketch after this list
17.- Quicksort-flavored algorithm: Sort positives (O(p log p)), assign each negative its optimal interleaving rank recursively (O(n log p)); see the divide-and-conquer sketch after this list
18.- Negatives falling in a segment whose endpoints share the same optimal interleaving rank get that rank for free, with no further sorting
19.- Quicksort-flavored complexity: O(p log p + n log p + p log n) = O(n log p) for n ≥ p; see the worked comparison after this list
20.- Baseline complexity: O(n log n + np), asymptotically worse than the quicksort-flavored algorithm
21.- Empirical performance on PASCAL action classification: optimizing AP/NDCG loss improves accuracy, with quicksort-flavored training time comparable to the zero-one loss
22.- The quicksort-flavored algorithm scales well compared to the baseline as the number of samples increases
23.- Weakly supervised object detection on PASCAL VOC: optimizing AP loss improves mean performance by more than 7%
24.- Training a deep model on CIFAR-10: AP/NDCG loss improves performance, and the quicksort-flavored algorithm is faster than the baseline
25.- Optimizing a rank-based loss improves ranking model performance
26.- But rank-based losses are expensive to optimize in general
27.- QS-suitable rank-based losses enable efficient optimization
28.- Performance improves without additional computation time
29.- Applicability to other ranking scores such as the F-score or mean reciprocal rank not yet explored
30.- The approach assumes zero-one (binary relevance) ground-truth labels; extension to pairwise preferences is not considered
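
A few illustrative sketches follow. First, item 3's two measures: a minimal Python sketch for binary relevance labels (function names are mine, not from the talk):

import math

def average_precision(ranked_labels):
    # Item 3: AP = mean, over the positive samples, of the precision
    # at each positive's rank (labels are 1 = relevant, 0 = not).
    hits, precisions = 0, []
    for rank, y in enumerate(ranked_labels, start=1):
        if y == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(len(precisions), 1)

def ndcg(ranked_labels):
    # Item 3: NDCG = DCG of the ranking divided by the best possible DCG.
    dcg = sum(y / math.log2(r + 1) for r, y in enumerate(ranked_labels, start=1))
    ideal = sum(1 / math.log2(r + 1) for r in range(1, sum(ranked_labels) + 1))
    return dcg / ideal if ideal > 0 else 0.0

# Positives at ranks 1, 3 and 4:
print(average_precision([1, 0, 1, 1, 0]))  # (1/1 + 2/3 + 3/4) / 3 ≈ 0.806
print(ndcg([1, 0, 1, 1, 0]))               # ≈ 0.906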
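
Items 9 and 12 in the same vein: the interleaving rank of a negative is just a running count of positives, and the list of these counts is the interleaving pattern the loss depends on (a sketch, assuming binary labels in ranked order):

def interleaving_ranks(ranked_labels):
    # Item 9: for each negative (0), count the positives (1) before it.
    # Item 12: an interleaving-dependent loss depends only on this list.
    ranks, positives_seen = [], 0
    for y in ranked_labels:
        if y == 1:
            positives_seen += 1
        else:
            ranks.append(positives_seen)
    return ranks

print(interleaving_ranks([1, 0, 1, 1, 0, 0]))  # [1, 3, 3]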
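
Item 16's interleaving step, hedged: negative decomposability (item 11) means the loss-augmented objective splits into one term per negative, so the baseline can scan all p+1 candidate ranks for each of the n sorted negatives. delta(j, i) below is an assumed callable giving negative j's contribution when it receives interleaving rank i, not something the talk defines; the scan itself is the O(np) part:

def baseline_interleaving(n, p, delta):
    # For each negative j (already sorted by score), try every
    # interleaving rank i in 0..p and keep the one that maximizes
    # the most-violating (loss-augmented) objective: O(np) overall.
    return [max(range(p + 1), key=lambda i: delta(j, i)) for j in range(n)]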
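
Item 17's recursion, sketched under stated assumptions: a divide-and-conquer in the spirit of the talk, not the paper's exact pseudocode. It relies on the partial ordering of item 14 (a higher-scored negative never needs a larger interleaving rank) and on item 18's shortcut (once a segment's rank interval collapses to a single value, every negative in it gets that rank with no sorting at all). opt_rank is an assumed oracle, e.g. a delta scan restricted to [lo, hi]; scores are assumed distinct for brevity:

import random

def qs_flavored(negs, lo, hi, opt_rank, out):
    # Assign every negative score in `negs` an interleaving rank in [lo, hi].
    if not negs:
        return
    if lo == hi:
        for s in negs:           # item 18: the whole segment gets the same
            out[s] = lo          # rank for free, no further comparisons
        return
    pivot = random.choice(negs)
    r = opt_rank(pivot, lo, hi)  # optimal interleaving rank of one pivot
    out[pivot] = r
    # Item 14: higher-scored negatives get ranks <= r, lower-scored >= r.
    qs_flavored([s for s in negs if s > pivot], lo, r, opt_rank, out)
    qs_flavored([s for s in negs if s < pivot], r, hi, opt_rank, out)

Each call does linear work on its segment plus one oracle call, and a branch stops as soon as its rank interval collapses; that early stopping, rather than a full sort of the negatives, is where the ~n log p behavior of item 19 comes from.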
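
Finally, the worked comparison behind items 19-20, assuming n ≥ p (retrieval and detection typically have far more negatives than positives), written as LaTeX:

% Quicksort-flavored (item 19): with p \le n, the middle term dominates
O(p \log p + n \log p + p \log n) = O(n \log p)
% Baseline (item 20): the O(np) interleaving scan dominates once p \ge \log n
O(n \log n + np) = O(np)
% Ratio for n \gg p: \frac{np}{n \log p} = \frac{p}{\log p}, so the gain grows with p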
Knowledge Vault built by David Vivancos 2024