Knowledge Vault 5/20 - CVPR 2016
DenseCap: Fully Convolutional Localization Networks for Dense Captioning.
Justin Johnson, Andrej Karpathy, Li Fei-Fei
< Resume Image >

Concept Graph & Resume using Claude 3 Opus | ChatGPT-4o | Llama 3:

graph LR
  classDef captioning fill:#f9d4d4, font-weight:bold, font-size:14px
  classDef dataset fill:#d4f9d4, font-weight:bold, font-size:14px
  classDef prior fill:#d4d4f9, font-weight:bold, font-size:14px
  classDef new fill:#f9f9d4, font-weight:bold, font-size:14px
  classDef results fill:#f9d4f9, font-weight:bold, font-size:14px
  classDef misc fill:#d4f9f9, font-weight:bold, font-size:14px
  A[DenseCap: Fully Convolutional Localization Networks for Dense Captioning] --> B[Dense captioning: Detects image regions, describes naturally. 1]
  A --> C[Visual Genome Region Captions dataset: 100K images, 5.4M region captions. 2]
  A --> D[Prior methods: CNN extracts features, RNN generates captions. 3]
  D --> E[Prior object detection R-CNN: Extracts proposals, CNN predicts labels. 4]
  D --> F[Prior dense captioning: Inefficient, lacks context, uses CNN, RNN. 5]
  A --> G[New dense captioning: Single model outputs regions, captions. 6]
  G --> H[Efficient convolution, recognition layers. 7]
  G --> I[Proposes regions using anchor boxes. 8]
  G --> J[Aligns proposals, increases match confidence. 9]
  G --> K[Bilinear interpolation for end-to-end training. 10]
  G --> L[Final dense captioning: CNN, localization, recognition, RNN trained. 11]
  A --> M[Joint training losses: Localization, recognition, captioning. 12]
  M --> N[Better context, efficient, end-to-end training. 13]
  A --> O[Qualitative results: Detects captions, regions in images. 14]
  O --> P[Dense captioning metric: Measures bounding box, caption quality. 15]
  O --> Q[Efficiency: Processes multiple frames per second. 16]
  A --> R[Bonus: Reverse model for region retrieval. 17]
  R --> S[Region retrieval: CNN, localization, recognition, RNN rank regions. 18]
  R --> T[Region retrieval results: Matches names, interactions, some confusion. 19]
  A --> U[Released code: Training/test code, demo on GitHub. 20]
  class A,B captioning
  class C dataset
  class D,E,F prior
  class G,H,I,J,K,L new
  class M,N results
  class O,P,Q results
  class R,S,T results
  class U misc

Resume:

1.- Dense captioning: Jointly detecting image regions and describing them in natural language. Combines object detection's label density with image captioning's label complexity.

2.- Visual Genome Region Captions dataset: Over 100K images with 5.4M human-written region captions, averaging 50 regions per image, used to train dense captioning models.

3.- Prior image captioning: CNN extracts image features, RNN generates caption one word at a time conditioned on previous words.
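
A minimal PyTorch-style sketch of this decode loop (module names and sizes are illustrative, not the authors' Torch implementation):

import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    """Greedy RNN decoder conditioned on a CNN image feature (illustrative sizes)."""
    def __init__(self, vocab_size=10000, feat_dim=4096, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.init_h = nn.Linear(feat_dim, hidden)      # image feature -> initial hidden state
        self.rnn = nn.LSTMCell(hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, feat, start_token=1, max_len=15):
        h = torch.tanh(self.init_h(feat))              # condition the RNN on the image
        c = torch.zeros_like(h)
        word = torch.full((feat.size(0),), start_token, dtype=torch.long)
        caption = []
        for _ in range(max_len):                       # one word at a time,
            h, c = self.rnn(self.embed(word), (h, c))  # conditioned on previous words
            word = self.out(h).argmax(dim=-1)
            caption.append(word)
        return torch.stack(caption, dim=1)             # (batch, max_len) token ids

feat = torch.randn(1, 4096)                            # e.g. a VGG-16 fc7 image feature
tokens = CaptionDecoder()(feat)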

4.- Prior object detection (R-CNN): Region proposals extracted, cropped, processed by CNN to predict labels.

5.- Prior dense captioning pipeline: Inefficient, lacks context. Uses region proposals, crops them, processes with CNN, passes each to RNN.
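
A conceptual sketch of why that pipeline is slow (helper names are hypothetical): the full CNN runs once per proposal, so no computation is shared across regions.

def dense_caption_slow(image, proposals, crop_fn, cnn, rnn_decoder):
    # Prior-style dense captioning: every proposal re-runs the whole CNN.
    captions = []
    for box in proposals:                    # often hundreds of boxes per image
        crop = crop_fn(image, box)           # crop + resize pixels for this region
        feat = cnn(crop)                     # a full CNN forward pass per crop
        captions.append(rnn_decoder(feat))   # independent RNN decode per region
    return captions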

6.- New end-to-end dense captioning: Single model takes image, outputs regions & captions. Trained end-to-end on Visual Genome data.

7.- Efficiency: Split the CNN into convolutional layers & a fully-connected recognition network, and swap the order of convolution & cropping so convolutional features are computed once per image.
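
A rough PyTorch sketch of the reordering (layer choices are illustrative; crop_features stands in for the differentiable crop described later):

import torch.nn as nn
import torchvision

backbone = torchvision.models.vgg16(weights=None).features        # conv layers only
recognition = nn.Sequential(nn.Flatten(), nn.Linear(512 * 7 * 7, 4096), nn.ReLU())

def dense_features_fast(image, boxes, crop_features):
    feat_map = backbone(image)               # ONE conv pass per image: (1, 512, H/16, W/16)
    crops = [crop_features(feat_map, b, out_size=7) for b in boxes]  # crop features, not pixels
    return [recognition(c) for c in crops]   # only the small recognition net runs per region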

8.- Localization layer: Proposes candidate regions on convolutional feature map grid using anchor boxes. Transforms anchors into region proposals.
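
A sketch of the anchor transformation (Faster R-CNN-style box parameterization; values are illustrative):

import torch

def anchors_to_proposals(anchors, deltas):
    # anchors: (N, 4) as (cx, cy, w, h); deltas: (N, 4) predicted offsets (tx, ty, tw, th)
    cx, cy, w, h = anchors.unbind(dim=1)
    tx, ty, tw, th = deltas.unbind(dim=1)
    px = cx + tx * w                  # shift the center proportionally to anchor size
    py = cy + ty * h
    pw = w * torch.exp(tw)            # scale width/height in log space
    ph = h * torch.exp(th)
    return torch.stack([px, py, pw, ph], dim=1)

anchors = torch.tensor([[64., 64., 32., 32.],    # k anchors of different scales and
                        [64., 64., 64., 32.],    # aspect ratios at one grid cell
                        [64., 64., 32., 64.]])
proposals = anchors_to_proposals(anchors, torch.zeros_like(anchors))  # zero offsets = anchors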

9.- Training localization layer: Align proposals to ground truth. Increase confidence of matches, decrease others. Refine coordinates of matches.
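
A sketch of the matching step (thresholds follow the common Faster R-CNN convention and are illustrative):

import torch
from torchvision.ops import box_iou

def assign_targets(proposals_xyxy, gt_xyxy, pos_iou=0.7, neg_iou=0.3):
    iou = box_iou(proposals_xyxy, gt_xyxy)              # (num_proposals, num_gt) pairwise IoU
    best_iou, best_gt = iou.max(dim=1)
    labels = torch.full((proposals_xyxy.size(0),), -1)  # -1 = ignored by the loss
    labels[best_iou >= pos_iou] = 1                     # matched: push confidence up, regress box
    labels[best_iou < neg_iou] = 0                      # unmatched: push confidence down
    return labels, best_gt                              # best_gt indexes the regression target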

10.- Bilinear interpolation (vs ROI pooling) for cropping: Enables backpropagation through box coordinates for end-to-end training.
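
A minimal sketch of a differentiable bilinear crop via grid sampling (an illustration of the idea, not the paper's exact sampling grid):

import torch
import torch.nn.functional as F

def bilinear_crop(feat_map, box, out_size=7):
    # box = (x1, y1, x2, y2) in normalized [-1, 1] feature-map coordinates
    x1, y1, x2, y2 = box
    ys = torch.linspace(0, 1, out_size)
    xs = torch.linspace(0, 1, out_size)
    grid_y = y1 + (y2 - y1) * ys.view(-1, 1).expand(out_size, out_size)
    grid_x = x1 + (x2 - x1) * xs.view(1, -1).expand(out_size, out_size)
    grid = torch.stack([grid_x, grid_y], dim=-1).unsqueeze(0)  # (1, out, out, 2)
    return F.grid_sample(feat_map, grid, align_corners=True)   # (1, C, out, out)

feat_map = torch.randn(1, 512, 40, 60, requires_grad=True)
box = torch.tensor([-0.5, -0.5, 0.5, 0.5], requires_grad=True)
bilinear_crop(feat_map, box).sum().backward()
print(box.grad)   # non-zero: gradients reach the box coordinates, unlike ROI pooling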

11.- Final dense captioning architecture: CNN, localization layer, fully-connected recognition net, and RNN trained jointly end-to-end.
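
Put together, the whole forward pass is differentiable, which is what allows joint training (a high-level sketch with the stages passed in as callables):

def densecap_forward(image, cnn, localization, recognition, rnn_decoder):
    feat_map = cnn(image)                          # convolutional feature map
    boxes, region_feats = localization(feat_map)   # proposals + differentiable crops
    codes = [recognition(r) for r in region_feats]
    captions = [rnn_decoder(c) for c in codes]
    return boxes, captions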

12.- Five joint training losses: localization (proposal box regression & objectness classification), recognition (final box regression & confidence classification), and captioning (per-word cross-entropy).
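
A sketch of the combined objective (loss weights are illustrative; `out` and `tgt` are dicts of predictions and targets):

import torch.nn.functional as F

def densecap_loss(out, tgt, w_box=0.1, w_cap=1.0):
    return (
        F.binary_cross_entropy_with_logits(out["prop_score"], tgt["prop_label"])      # 1 proposal confidence
        + w_box * F.smooth_l1_loss(out["prop_box"], tgt["prop_box"])                  # 2 proposal box regression
        + F.binary_cross_entropy_with_logits(out["final_score"], tgt["final_label"])  # 3 final confidence
        + w_box * F.smooth_l1_loss(out["final_box"], tgt["final_box"])                # 4 final box regression
        + w_cap * F.cross_entropy(out["word_logits"], tgt["words"])                   # 5 captioning (per-word)
    )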

13.- Benefits over prior work: Better context via large CNN receptive fields, efficient computation sharing, end-to-end region proposals & training.

14.- Qualitative results: Detects & captions salient regions (objects, parts, stuff) in Visual Genome test images and novel images.

15.- Dense captioning evaluation metric: Measures both bounding box and caption quality. Outperforms prior work by a healthy margin.
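
The idea behind the metric, in a simplified sketch: a predicted region only counts if both its box IoU and its caption METEOR clear a threshold pair, and the score is averaged over a grid of such pairs (the paper computes average precision per pair; plain accuracy is used here to keep the sketch short):

def densecap_metric_sketch(matches,
                           iou_thresholds=(0.3, 0.4, 0.5, 0.6, 0.7),
                           meteor_thresholds=(0.0, 0.05, 0.1, 0.15, 0.2, 0.25)):
    # matches: list of (box IoU, caption METEOR) pairs, one per predicted region
    scores = []
    for t_iou in iou_thresholds:
        for t_met in meteor_thresholds:
            correct = sum(1 for iou, met in matches if iou >= t_iou and met >= t_met)
            scores.append(correct / max(len(matches), 1))
    return sum(scores) / len(scores)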

16.- Efficiency: Processes multiple high-res frames per second on a GPU, 13x faster than the prior pipeline.

17.- Bonus: Reverse model for region retrieval given natural language query.

18.- Region retrieval method: Forward pass of CNN, localization & recognition. Rank by probability RNN generates query from region.
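
A sketch of the ranking step (`rnn_logits_fn` is a hypothetical helper returning the RNN's per-step logits for the query, teacher-forced on a region's code):

import torch
import torch.nn.functional as F

def rank_regions_by_query(region_codes, query_tokens, rnn_logits_fn):
    scores = []
    for code in region_codes:
        logits = rnn_logits_fn(code, query_tokens)     # (T, vocab) logits for the query words
        logp = F.log_softmax(logits, dim=-1)
        score = logp[torch.arange(len(query_tokens)), query_tokens].sum()  # log P(query | region)
        scores.append(score)
    return torch.argsort(torch.stack(scores), descending=True)  # best-matching regions first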

19.- Region retrieval results: Matches object names, interactions like "hands holding phone". Some confusion on specifics like front/back wheels.

20.- Released code & demo: Training/test code, AP metric, live webcam demo on GitHub.

Knowledge Vault built by David Vivancos 2024