Differential privacy allows graph statistics to be published in a way that protects individual privacy while still allowing meaningful insights to be derived from the data. The centralized model of differential privacy assumes a trusted data curator, whereas the local model requires no such trusted authority. Local differential privacy is commonly achieved through randomized response (RR) mechanisms, which do not preserve the sparseness of graphs. Since most real-world graphs are sparse and contain a large number of nodes, this is a drawback of RR-based mechanisms in terms of both computational efficiency and accuracy. We therefore propose a comparative analysis, through experiments and discussion, of methods for computing graph statistics with local differential privacy, and show that preserving the sparseness of the original graphs is the key factor in balancing utility and privacy. We perform several experiments to test the utility of the protected graphs in terms of subgraph counting (e.g., triangle and star counting) and other statistics. We show that the sparseness-preserving algorithm gives comparable or better results than other state-of-the-art methods while improving computational efficiency.
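To illustrate why RR destroys sparseness, the following minimal sketch (not the paper's implementation; the graph size, edge set, and privacy budget are illustrative assumptions) applies Warner's randomized response to one row of an adjacency matrix: each of the many zero entries is flipped to 1 with probability 1/(e^ε + 1), so a node with a handful of true edges reports hundreds of spurious ones.

```python
import math
import random

def randomized_response(bit, eps):
    # Warner's randomized response: keep the true bit with
    # probability e^eps / (e^eps + 1), otherwise flip it.
    keep = math.exp(eps) / (math.exp(eps) + 1)
    return bit if random.random() < keep else 1 - bit

random.seed(0)
n = 1000                       # illustrative graph size
true_neighbors = {3, 17, 42}   # a sparse adjacency row: 3 edges out of n
noisy_row = [randomized_response(1 if j in true_neighbors else 0, eps=1.0)
             for j in range(n)]
# Roughly n / (e + 1) ~ 270 spurious edges appear, so the noisy
# graph is dense even though the true graph is sparse.
print("true edges:", len(true_neighbors), "noisy edges:", sum(noisy_row))
```

This densification is what drives up both the communication/computation cost and the noise in subgraph counts, motivating sparseness-preserving mechanisms.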