After the user takes a photo, the app reads the RGB values from the photo and extracts the required blocks; it then applies an RGB correction algorithm to fix the picture's colors.
The specific requirement: after a photo is taken, detect the color cast by converting from RGB to Lab, use the Lab values to determine which color is deviated, and then correct each color in the photo according to that deviation. I'm currently stuck on how to get each color's deviation from Lab. Any suggestions would be appreciated, thank you!
This can be implemented with either CIFilter or GPUImage.
There is an app called Bacterial Analyzer that sounds very similar to what you're describing. A friend of mine developed it; if it's close enough, I can give you his contact information.
OP, has the problem been solved? I'm running into it now too: I want to get the RGB values of a UIImage and then operate on its pixels, but I can't get the RGB!
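You can't read a UIImage's pixels directly; the usual route on iOS is to draw its backing CGImage into a `CGBitmapContext` and then read the raw RGBA bytes out of the buffer. The byte-indexing math is the same in any language; here is a sketch in Python, with a made-up 2x2 sample buffer standing in for the bitmap context's data:

```python
WIDTH, HEIGHT, BPP = 2, 2, 4  # 4 bytes per pixel: R, G, B, A

# Hypothetical 2x2 image, row-major: red, green / blue, white
buf = bytearray([
    255, 0, 0, 255,    0, 255, 0, 255,
    0, 0, 255, 255,    255, 255, 255, 255,
])

def rgba_at(buf, x, y, width=WIDTH, bpp=BPP):
    """Return the (r, g, b, a) tuple at pixel (x, y)."""
    i = (y * width + x) * bpp  # row-major byte offset into the buffer
    return tuple(buf[i:i + bpp])
```

Two caveats when doing this for real: the byte order depends on the `CGBitmapInfo` you create the context with (it may be BGRA rather than RGBA), and the context's `bytesPerRow` can be padded, so use it instead of `width * 4` when computing the row offset.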