Concept Graph & Summary using Claude 3 Opus | ChatGPT-4o | Llama 3:
Summary:
1.- Inverse graphics: Analyzing scenes by inverting the graphics rendering process, going from images to scene descriptions.
2.- Challenges: Shape variability, appearance variability, and real-time inference are the main challenges in inverse graphics.
3.- Probabilistic programming: Adding random variables to graphics programs to create stochastic scene simulators for inference.
4.- Constrained simulation: Running probabilistic programs conditioned on test data to infer scene properties (items 3-4 are sketched in code after this list).
5.- Handling shape variability: Using rich forward graphics simulators and non-parametric statistical processes over 3D meshes.
6.- Appearance variability: Building cartoon appearance models and projecting both the rendered models and the real data through deep networks so they can be compared at an abstract feature level.
7.- Faster inference: Combining top-down inference methods with fast bottom-up recognition proposals.
8.- Helmholtz machines: Using top-down knowledge to train bottom-up discriminative pipelines, as proposed by Hinton et al.
9.- Sleep state: Hallucinating data from the probabilistic program and storing it in an external long-term memory.
10.- Program traces: Running the program once yields a trace recording every random variable and its corresponding output.
11.- Function approximators: Training neural networks to predict individual trace variables, using the values recorded in the traces as targets.
12.- Structured long-term memory: Projecting hallucinated data using learned approximators to create a semantically structured memory.
13.- Inference with Helmholtz proposals: Sampling program traces from the structured-memory region corresponding to a test image (items 9-13 are sketched in code after this list).
14.- Pattern matching and reasoning: Doing 90% pattern matching from memory and 10% reasoning for efficient inference.
15.- Conceptual framework: a scene language, an approximate renderer, a representation layer, and a score function (see the skeleton sketch after this list).
16.- Human body pose example: Using an off-the-shelf 3D mesh in Blender for pose estimation.
17.- Combining discrete and continuous models: Integrating DPM pose models with top-down inference for improved results.
18.- 3D shape program example: Writing a flexible program to define a distribution over 3D meshes.
19.- Comparing with intrinsic images: Running Barron and Malik's intrinsic image method on the test data for comparison.
20.- Future work: Building a library of rich forward simulators and integrating with deep learning frameworks.
21.- Automatic differentiation engines: Developing fast AD engines with mixed CPU and GPU support for probabilistic programs.
22.- Deep integration of programs and neural networks: Training end-to-end systems with CNN encoders and differentiable probabilistic program decoders (a toy end-to-end sketch appears after this list).
23.- Torch or Caffe integration: Implementing the proposed approach with close integration to deep learning frameworks.
24.- Learning beyond parameters: Exploring learning in the space of programs or subroutines.
25.- Exciting research direction: Developing models that combine deep neural networks with differentiable probabilistic programs.
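Illustrative code sketches:

Items 3-4: the sketch below is a minimal, hypothetical illustration (not the speaker's system) of a graphics program with random variables used as a stochastic scene simulator, then conditioned on a test image with Metropolis-Hastings. The toy scene is a bright square whose position and size are latent; every function name and parameter here is an assumption made for the example.

```python
# Toy stochastic scene simulator plus constrained simulation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
H = W = 32

def sample_scene():
    """Prior over scene descriptions: a square with random position and size."""
    return {"x": rng.uniform(4, W - 8), "y": rng.uniform(4, H - 8),
            "size": rng.uniform(3, 8)}

def render(scene):
    """Approximate forward renderer: rasterise the square into a 32x32 image."""
    img = np.zeros((H, W))
    x0, y0, s = int(scene["x"]), int(scene["y"]), int(scene["size"])
    img[max(y0, 0):y0 + s, max(x0, 0):x0 + s] = 1.0
    return img

def log_score(rendered, observed, sigma=0.2):
    """Gaussian pixel likelihood comparing the render to the observed data."""
    return -0.5 * np.sum((rendered - observed) ** 2) / sigma ** 2

def infer(observed, steps=2000):
    """Constrained simulation: Metropolis-Hastings over the latent variables."""
    scene = sample_scene()
    cur = log_score(render(scene), observed)
    for _ in range(steps):
        proposal = {k: v + rng.normal(0.0, 0.5) for k, v in scene.items()}
        new = log_score(render(proposal), observed)
        if np.log(rng.uniform()) < new - cur:   # accept/reject
            scene, cur = proposal, new
    return scene

observed = render({"x": 20, "y": 10, "size": 6})   # synthetic "test image"
print({k: round(v, 1) for k, v in infer(observed).items()})
```

The renderer is only run forward and scored against the data, which is the constrained-simulation view of item 4: no gradients or renderer internals are required.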
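Items 9-13: a minimal sketch of the Helmholtz-style sleep phase, assuming a toy square-rendering program like the one above. It hallucinates (trace, image) pairs from the prior, fits a ridge regressor from images to trace variables (standing in for the neural-network approximators of item 11), and keeps the hallucinated data as a memory that can be queried with a test image to produce bottom-up proposals. All names are illustrative.

```python
# Sleep-phase hallucination, bottom-up approximator, and memory-based proposals.
import numpy as np

rng = np.random.default_rng(1)
H = W = 32

def sample_scene():
    """Latent scene variables: square position (x, y) and size."""
    return np.array([rng.uniform(4, 24), rng.uniform(4, 24), rng.uniform(3, 8)])

def render(z):
    """Toy forward renderer for the latent square."""
    img = np.zeros((H, W))
    x0, y0, s = int(z[0]), int(z[1]), int(z[2])
    img[y0:y0 + s, x0:x0 + s] = 1.0
    return img

# --- Sleep: run the program many times and record full traces --------------
traces = np.array([sample_scene() for _ in range(500)])     # latent variables
images = np.array([render(z).ravel() for z in traces])      # rendered outputs

# --- Function approximator: image -> latent variables (ridge regression) ---
X = np.hstack([images, np.ones((len(images), 1))])          # bias column
Wmap = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ traces)

def propose(test_img, k=5):
    """Bottom-up Helmholtz proposals: a direct regression guess plus the
    stored traces whose hallucinated images are closest to the test image."""
    guess = np.append(test_img.ravel(), 1.0) @ Wmap
    dists = np.sum((images - test_img.ravel()) ** 2, axis=1)
    neighbours = traces[np.argsort(dists)[:k]]
    return guess, neighbours

test = render(np.array([20.0, 10.0, 6.0]))
guess, nearby = propose(test)
print("regressed scene:", np.round(guess, 1))
print("memory proposals:", np.round(nearby, 1))
```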
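Item 15: a structural skeleton of the four-part framework. The class and field names are my assumptions, not the authors' API; the point is only how a scene language, an approximate renderer, a representation layer, and a score function compose into a single log-score used during inference.

```python
# Skeleton of the four-part inverse-graphics framework (names are illustrative).
from dataclasses import dataclass
from typing import Any, Callable
import numpy as np

@dataclass
class InverseGraphicsModel:
    sample_scene: Callable[[], Any]                    # scene language: prior over scenes
    render: Callable[[Any], np.ndarray]                # approximate renderer
    represent: Callable[[np.ndarray], np.ndarray]      # representation layer (e.g. deep features)
    score: Callable[[np.ndarray, np.ndarray], float]   # comparator / likelihood

    def log_score(self, scene, observed_img):
        """Score a hypothesised scene against observed data in representation space."""
        return self.score(self.represent(self.render(scene)),
                          self.represent(observed_img))

# Toy instantiation: identity-like representation and a Gaussian pixel score.
model = InverseGraphicsModel(
    sample_scene=lambda: {"x": np.random.uniform(0, 1)},
    render=lambda s: np.full((4, 4), s["x"]),
    represent=lambda img: img.ravel(),
    score=lambda a, b: float(-0.5 * np.sum((a - b) ** 2)),
)
print(model.log_score(model.sample_scene(), np.zeros((4, 4))))
```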
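Item 22: a toy end-to-end sketch, assuming PyTorch, of a CNN encoder feeding a differentiable decoder. Here the "probabilistic program decoder" is reduced to a differentiable renderer that places a Gaussian blob at the predicted location, so reconstruction error can be backpropagated through rendering into the encoder. This is my own illustration of the idea, not the speaker's implementation.

```python
# CNN encoder + differentiable renderer trained end-to-end (illustrative only).
import torch
import torch.nn as nn

H = W = 32
ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                        torch.arange(W, dtype=torch.float32), indexing="ij")

def render(params):
    """Differentiable renderer: params[:, 0:2] is the blob centre in pixels."""
    cx = params[:, 0].view(-1, 1, 1)
    cy = params[:, 1].view(-1, 1, 1)
    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 10.0)

encoder = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 2),            # predicts the blob centre (cx, cy)
)

opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
for step in range(200):
    centres = torch.rand(16, 2) * 24 + 4          # random ground-truth scenes
    imgs = render(centres).unsqueeze(1)           # "observed" images
    pred = encoder(imgs)                          # bottom-up scene estimate
    loss = ((render(pred) - imgs.squeeze(1)) ** 2).mean()  # render-and-compare
    opt.zero_grad(); loss.backward(); opt.step()
print("final reconstruction loss:", float(loss))
```

In a fuller system the decoder would be a richer differentiable scene program and the encoder output would parameterise its random choices, which is the end-to-end training direction item 22 points to.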
Knowledge Vault built by David Vivancos 2024