The feasibility of using conditional GANs (Generative Adversarial Networks) to predict gripability in log piles is investigated. This is done by posing gripability heatmap prediction from RGB-D data as an image-to-image translation problem. Conditional GANs have previously achieved impressive results on several image-to-image translation tasks, predicting physical properties and adding details not present in the input images. Here, piles of logs modelled as sticks or rods are generated in simulation, and ground-truth gripability maps are created using a simple algorithm, streamlining the data-collection process. A modified SSIM (Structural Similarity Index) is used to evaluate the quality of the gripability heatmap predictions. The results indicate promising model performance on several different datasets and heatmap designs, including the use of base-plane textures from a real forest production site to add realistic noise to the RGB data. Including a depth channel in the input data is shown to increase performance compared to using pure RGB data. The implementation is based on the general Pix2Pix network developed by Isola et al. in 2017. However, there is potential to increase performance and model generalization, and the adoption of more advanced loss functions and network architectures is suggested. Next steps include using terrains reconstructed from high-density laser scans in physics-based simulation for data generation. A more in-depth discussion regarding the level of sophistication required in the gripability heatmaps should also be carried out, along with discussions regarding other specifications that will be required for future deployment. This will enable derivation of a tailored gripability metric for ground-truth heatmap generation, and evaluation of the method on less ideal data.
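The specific modification to SSIM used for evaluation is not detailed here. As an illustration only, the sketch below implements the standard global (single-window) SSIM formula of Wang et al. (2004) on flattened heatmap values; the helper name `ssim_global` and the use of a single window rather than a sliding Gaussian window are simplifications, not the paper's method.

```python
def ssim_global(x, y, data_range=255.0):
    """Simplified global SSIM between two flattened heatmaps.

    Real SSIM implementations aggregate the index over sliding
    (typically Gaussian-weighted) local windows; here a single
    window covering the whole image is used for brevity.
    """
    # Stabilizing constants from Wang et al. (2004).
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2

    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    var_x = sum((v - mean_x) ** 2 for v in x) / n
    var_y = sum((v - mean_y) ** 2 for v in y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n

    # Luminance, contrast, and structure terms combined.
    return ((2 * mean_x * mean_y + c1) * (2 * cov + c2)) / (
        (mean_x ** 2 + mean_y ** 2 + c1) * (var_x + var_y + c2)
    )
```

Identical heatmaps score 1.0, and any deviation between prediction and ground truth pushes the score below 1, which makes the index a natural fit for comparing predicted and ground-truth gripability maps.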