2021-03-26 by P.A.Semi
Technical description of Solar image processing
Contents:
Eval Library,
Image sources,
Colors,
Sharpening methods,
Video encoding.
Eval library
For Solar image processing I use my own Eval
programming language and libraries.
It is built for the win32 platform, but someone has reported using it on i386 Linux under the Wine
emulator...
For me it is four-level programming, because the core interpreter is written in Pascal and Assembler,
above which are objects like Bitmap or FloatMap, written in highly optimized Pascal and Assembler,
above which are script libraries like FlowLib.lib,
and on top of that is an often simple script program,
which can usually be written on a single line of code
using the eval.exe command-line interpreter...
Object Bitmap is a 32-bit 2D bitmap with bytes 0..255 for red, green, blue and alpha (transparency).
Object FloatMap is a 32-bit floating-point 2D map for a single channel...
While it is not limited to any specific range, a range of 0..1 is usually used for image operations...
Image sources
Images are taken either from JPEG sources on the NASA SDO browse web-site,
or Fits files from JSOC/AIA
and JSOC/HMI at Stanford,
or Fits files from NSO...
Colors
For coloring data files, I use these palettes:
- Scale171N.xml is derived from older NASA SDO browse jpegs.
It uses:
R = IntGamma(Sqrt(x) * 227/32, -68.64);
G = IntGamma(Sqrt(x) * 157/32, -42.72);
B = IntGamma( Max(Sqrt(x)*0.25, Min( Power(x/4096,3) * 512, 3.14*(250-125*(1+Exp(-x/4096))) )), -60);
// function Exp(x) == e^x
function IntGamma(F,G) {
  return 255 * Power(F/255, (256-G)/256 );
}
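To make the formulas easier to test, here is a minimal NumPy sketch of how I read them; the function names, the assumption that x is the raw 0..4096 AIA value, and the final clip to 0..255 are mine, not part of Scale171N.xml:
import numpy as np

def int_gamma(f, g):
    # IntGamma(F,G) = 255 * (F/255) ** ((256-G)/256), as defined above
    return 255.0 * np.power(np.maximum(f, 0.0) / 255.0, (256.0 - g) / 256.0)

def scale171n(x):
    # x: raw AIA 171 value, roughly 0..4096 (assumption)
    x = np.maximum(np.asarray(x, dtype=np.float64), 0.0)
    r = int_gamma(np.sqrt(x) * 227.0 / 32.0, -68.64)
    g = int_gamma(np.sqrt(x) * 157.0 / 32.0, -42.72)
    b_in = np.maximum(np.sqrt(x) * 0.25,
                      np.minimum((x / 4096.0) ** 3 * 512.0,
                                 3.14 * (250.0 - 125.0 * (1.0 + np.exp(-x / 4096.0)))))
    b = int_gamma(b_in, -60.0)
    # clipping to 8-bit at the end is my assumption
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0).astype(np.uint8)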
- For Magnetograms I use JsocHmi_ColorScale.xml
derived from HMI.MagColor.IDL_256.lut
from JSOC / Stanford
as described in HMI_M.ColorTable.pdf...
Sharpening methods
Multiple sharpening methods are used:
- For sharpening 1024x1024 Fits images, the image is upscaled 2x and an unsharp mask (UnsharpMask) is applied:
This is now in FlowLib.lib as function Sharpen171() :
var w:=FMap.Width, h:=FMap.Height, bnResized;
if ((w<4096) && !bn4k) { // bn4k - source not upsized even if small...
  bnResized:=true;
  FMap.Resize(w*2,h*2,sfLinear);
}
// Sharpen:
var FMi:=FMap.Clone().Scale(0.5);
FMap.WeightMix(FMap.Clone().UsmBox(5,3),0.75);
FMap.MaxMap(FMi); // prevent too dark shadows around flares...
// Scale back to original size:
if (bnResized) FMap.Resize(w,h,sfLanczos);
For original 4096x4096 Fits files the upscaling is not performed, since they contain too much noise...
The unsharp mask creates too dark shadows around very bright places,
which is mitigated by limiting the darkening to at most 0.5 of the original value...
Compare original
and sharpened...
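Since the Eval primitives (UsmBox, WeightMix, MaxMap) are not documented here, the following NumPy/SciPy sketch shows my reading of the same steps; the unsharp-mask gain, the box size and the interpolation orders are assumptions, not the library's exact behaviour:
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def sharpen171_sketch(img, already_4k=False):
    img = np.asarray(img, dtype=np.float64)
    resized = False
    if img.shape[0] < 4096 and not already_4k:
        img = zoom(img, 2, order=1)           # bilinear upscale, like Resize(...,sfLinear)
        resized = True
    floor = img * 0.5                         # like FMi: darkening limit at half the original
    blurred = uniform_filter(img, size=5)     # box blur; stands in for UsmBox(5,3)
    sharpened = img + (img - blurred)         # basic unsharp mask
    out = img + 0.75 * (sharpened - img)      # WeightMix(...,0.75): 75% sharpened, 25% original
    out = np.maximum(out, floor)              # MaxMap: prevent too dark shadows around flares
    if resized:
        out = zoom(out, 0.5, order=3)         # back to original size (Lanczos in the original)
    return out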
- For sharpening 4096x4096 images, the sharpening is limited to lighter places:
This is now in FlowLib.lib as function Sharpen171_4k() :
var FMask:=FMap.DoG(5);
FMask.Scale(2*64/4096).Abs().Offset(-0.1).Clip(0,1);
FMask.BoxCar(3);
FMask.Scale(2).Clip(0,1);
var F2:=Sharpen171(FMap.Clone(),true);
FMap.WeightMix(F2,FMask);
Compare original
and sharpened...
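Again a rough NumPy/SciPy sketch of the same idea, under the same caveats as above; the DoG sigmas and the inlined stand-in for Sharpen171() are my assumptions:
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def sharpen171_4k_sketch(img):
    img = np.asarray(img, dtype=np.float64)
    # difference-of-Gaussians "structure" mask (sigmas are my guess for DoG(5))
    dog = gaussian_filter(img, 1.0) - gaussian_filter(img, 5.0)
    mask = np.clip(np.abs(dog * (2 * 64 / 4096)) - 0.1, 0, 1)
    mask = uniform_filter(mask, size=3)                    # BoxCar(3): soften the mask
    mask = np.clip(mask * 2, 0, 1)
    # stands in for Sharpen171(FMap.Clone(), true) on the full-resolution image:
    floor = img * 0.5
    blurred = uniform_filter(img, size=5)
    sharp = np.maximum(img + 0.75 * (img - blurred), floor)
    return img * (1 - mask) + sharp * mask                 # per-pixel WeightMix by the mask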
- For clearing noise on 4k jpegs, the following function seemed fine:
function ClearNoise4k(B) {
  // in Bitmap, out Bitmap, operates on light of HSL
  B:=HslOp(B,#f(Fh,Fs,Fl){
    // Clear noise in darker parts:
    var w:=Fl.Width,h:=Fl.Height;
    var M1:=Fl.MakeRangeMask(0,0x50/0xff); // only darker parts below light 0x50
    var Fd:=NFm(w,h);
    Fl.LocalDiff(Fd,3); // extracts "noise"
    Fd.MaskRange(-0.05,0.05); // only weak noise
    Fd.Mask(M1); // only in darker parts
    Fl.Subtract(Fd.Scale(0.5)); // subtracts noise
    Fd:=null;
    // sharpen only in darker parts:
    var Fmin:=Fl.Clone().Scale(0.75), Fmax:=Fl.Clone().Scale(1.1);
    var FMask:=NFm(w,h); FMask.Fill(0.75); FMask.Mask(M1); FMask.FillEmpty(0);
    M1:=null;
    Fl.WeightMix(Fl.Clone().UsmBox(3,2),FMask);
    Fl.MaxMap(Fmin);
    Fl.MinMap(Fmax);
    Fmin:=Fmax:=FMask:=null;
    // Remove light noise dots:
    Fd:=NFm(w,h);
    Fl.LocalDiff(Fd,5);
    Fd.MaskRange(0x20/0xff,1); // where pixel differs from neighbourhood by 0x20
    Fd.FillEmpty(0);
    Fl.Subtract(Fd);
  });
  return B;
}
Compare step-1 sharpened
and cleared noise...
(This method was used for the C5-shadows video, which was later removed...)
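A condensed NumPy sketch of the main ClearNoise4k steps on a single 0..1 float channel (the original works on the HSL lightness of the Bitmap, and the final removal of light noise dots is omitted here); local_diff_sketch() is my stand-in for the library's LocalDiff, and the thresholds are copied from the function above:
import numpy as np
from scipy.ndimage import uniform_filter

def local_diff_sketch(f, size=3):
    # per-pixel difference from the average of its size x size neighbourhood
    return f - uniform_filter(f, size=size)

def clear_noise_4k_sketch(light):
    l = np.asarray(light, dtype=np.float64)
    dark = l < 0x50 / 0xff                              # only darker parts below light 0x50
    d = local_diff_sketch(l, 3)
    weak = dark & (np.abs(d) <= 0.05)                   # only weak noise in darker parts
    l = l - np.where(weak, d, 0.0) * 0.5                # subtract half of the noise
    lo, hi = l * 0.75, l * 1.1                          # limits around the current value
    blurred = uniform_filter(l, size=3)
    sharpened = l + 2.0 * (l - blurred)                 # unsharp mask, roughly UsmBox(3,2)
    l = np.where(dark, l + 0.75 * (sharpened - l), l)   # sharpen only in darker parts
    return np.clip(l, lo, hi)                           # MaxMap/MinMap limits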
- For sharpening NASA 2048x2048 jpegs,
as used in the SDO/AIA 171 details from 2k jpegs videos,
I used a web-server module, since it took about 5 or 6 seconds per image
and there were tens of thousands of images to sharpen,
and there were boondoggle notebooks with four-core processors,
so they processed the images in parallel: an image was uploaded by POST
and the sharpened image was downloaded as the reply. One queue ran through the year from Jan 1st forward,
another from December 31st backward, another from October 1st forward,
another covered July and August in between; when they met, they skipped what was already done
and finished the images where the web-servers had failed for any reason...
The function used is now in FlowLib.lib as function Sharpen2_Bmp:
function Sharpen2_Bmp(B,SharpWeight,LightWeight) {
  // from web/Sharp2.lib -- for secondary sharpening of 171/4k on bitmap/jpegs...
  var w:=B.Width, h:=B.Height;
  if (!w || !h) return null;
  // use SharpWeight=1.414, LightWeight=2 for Sdo.171 2k Jpegs ...
  SharpWeight:=Coalesce(SharpWeight,0.5);
  LightWeight:=Coalesce(LightWeight,0);
  //
  var Fh:=NFm(w,h), Fs:=NFm(w,h), Fl:=NFm(w,h);
  B.ExtractHsl(Fh,Fs,Fl); // HSL to float-maps...
  //
  Fl.Resize(w*2,h*2,sfLinear); // upscale
  //
  var FMask:=Fl.Clone(), Fl0;
  var FMi:=Fl.Clone().Scale(0.5); // minimum level for sharpen darkening limit...
  //
  // only in darker parts:
  FMask.Scale(-1,1).Sqr().Sqr().Scale(SharpWeight).Clip(0,1.5*SharpWeight);
  if (LightWeight!=0) {
    Fl0:=Fl.Clone();
  }
  //
  Fl.WeightMix(Fl.Clone().UsmBox(3,3).UsmBox(5,3),FMask);
  // in other parts:
  if (LightWeight!=0) {
    FMask.Scale(-1,1);
    Fl.WeightMix(Fl0.UsmBox(3,LightWeight).UsmBox(5,LightWeight),FMask);
  }
  //
  Fl.MaxMap(FMi); // limit unsharp-mask darkening
  //
  Fl.Resize(w,h,sfLanczos); // downscale to original size
  //
  B.CombineHsl(Fh,Fs,Fl); // recombine
}
Compare original
(or here)
and sharpened...
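A reduced NumPy sketch of the Sharpen2_Bmp weighting idea on a single 0..1 lightness channel (the real function extracts HSL from the Bitmap and also has a second LightWeight pass for the brighter parts, which is omitted here); the unsharp-mask gain and the interpolation orders are my assumptions:
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def sharpen2_sketch(light, sharp_weight=1.414):
    l = zoom(np.asarray(light, dtype=np.float64), 2, order=1)    # upscale 2x
    floor = l * 0.5                                              # darkening limit
    mask = np.clip((1.0 - l) ** 4 * sharp_weight,
                   0, 1.5 * sharp_weight)                        # strongest in darker parts
    blurred = uniform_filter(uniform_filter(l, size=3), size=5)  # two box blurs, as in UsmBox(3,3).UsmBox(5,3)
    sharpened = l + 3.0 * (l - blurred)                          # unsharp mask (gain is a guess)
    l = np.maximum(l + mask * (sharpened - l), floor)            # per-pixel WeightMix, darkening limited
    return np.clip(zoom(l, 0.5, order=3), 0.0, 1.0)              # back to original size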
- Preview video "Story of a filament"
uses Jpegs converted from JSOC 1024x1024 Fits files with the Sharpen171() method,
and further uses Contrast 32 with pivot 64, and Gamma 64...
(It operates in floating point to prevent getting a sparse histogram due to rounding errors...)
var w:=B.Width,h:=B.Height;
var Fr:=NFm(w,h),Fg:=NFm(w,h),Fb:=NFm(w,h); // 3 float-maps
B.ExtractRgb(Fr,Fg,Fb);
Adj(Fr); Adj(Fg); Adj(Fb);
B.CombineRgb(Fr,Fg,Fb);
function Adj(F) {
  F.Contrast(0.125,0.25); //F.Contrast(32/255,64/255);
  F.IntGamma(64);
}
Compare
step-1 sharpened
with
contrast-gamma adjusted...
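In floating point, the same adjustment could look roughly like this NumPy sketch; the exact semantics of the library's Contrast(contrast, pivot) are not shown above, so the pivot-centred stretch below is my guess, while the gamma follows the IntGamma formula from the Colors section, here for 0..1 values:
import numpy as np

def adj_sketch(f, contrast=0.125, pivot=0.25, g=64):
    f = np.asarray(f, dtype=np.float64)
    f = pivot + (f - pivot) * (1.0 + contrast)     # assumed pivot-centred contrast stretch
    f = np.clip(f, 0.0, 1.0)
    return np.power(f, (256.0 - g) / 256.0)        # IntGamma(64): exponent 0.75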
- For the bigger version of "Story of a filament",
in addition to sharpening, a "space" noise reduction is used
to limit the noise outside of the Sun in free space, which is amplified by the sharpening
needed for highlighting "inside" features...
Sharpening of 4096x4096 images has to be less "aggressive", because there is a lot of noise,
which is otherwise reduced by downscaling to smaller images...
Hence function Sharpen171_4k() effectively means "sharpen it less than Sharpen171()"...
You may compare
original (1),
result of Sharpen171_4k (2),
result of ReduceSpaceNoise applied to it (4),
and
result of ClearNoise4k applied to it (5)
...
For space-noise reduction, first a mask (3) is created
by downscaling to a small size, applying a Sobel edge detector, blurring and scaling,
and resizing back to the original size...
All places containing some "signal" and not only "noise" are preserved;
noise-only areas are then blurred with the kernel (7,5,3,1,0,1,3,5,7), like an inverse Gaussian...
function ReduceSpaceNoise(F,ValueRange) {
  // reduce noise on 171 outside limb (where there are no "details" and only noise...)
  // ValueRange=2048 for AIA 171...
  if (!ValueRange) {
    var Stat:=F.GetStat();
    if (Abs(Stat.Max)<=2) ValueRange:=1;
    else ValueRange:=Min(8*Round(Stat.Avg),Stat.Max);
  }
  // Make Mask of blank spaces without edges when scaled to small image:
  var w:=F.Width,h:=F.Height,i,
    Fm:=F.Clone().Resize(Round(w/16),Round(h/16),sfLanczos);
  Fm.Scale(1/ValueRange); // normalize to range cca 0 .. 1
  Fm.FillEmpty(0);
  Fm.Sobel(); // edge detect
  Fm.Scale(5).Clip(0,1);
  for(i=1;i<=3;i++) Fm.BoxCar(15); // simple blur
  Fm.Scale(2).Clip(0,1);
  Fm.Resize(w,h,sfLinear);
  Fm.Scale(-1,1); // invert mask... 1 on blank spaces to noise-reduce, 0 on space to preserve
  // use noise-reduced (blurred) only where masked:
  F.WeightMix(NoiseReduce(F.Clone(),Fm),Fm);
  return F;

  function NoiseReduce(F2,Fm) {
    var w:=F2.Width,h:=F2.Height;
    Fm:=Fm.Clone().Clip(0.01,1);
    F2.Multiply(Fm); // replace "preserved" spaces with black to not leak them into noise-reduced blank space...
    Reduce2(F2);
    F2.Divide(Fm);
    return F2;

    function Reduce2(F) {
      // blur it a lot:
      var w:=F.Width,h:=F.Height;
      var F3:=NFm(w,h);
      var V:=Vector(7,5,3,1,0,1,3,5,7), Sum:=V.Sum;
      F.ConvolveX(F3,V);
      F3.ConvolveY(F,V);
      F.Scale( 1/(Sum*Sum) );
    }
  }
}
Previews resized to half resolution in html:
The effect of ReduceSpaceNoise is visible in the bottom-left corner of the top-left panel,
while the neighbouring loops demonstrate that the noise reduction does not blur them...
[Preview image panels: Original | Sharpened]
You may also compare Sobel edge detection on the preview tiles
of original (8)
and sharpened (9)
...
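For reference, a NumPy/SciPy sketch of the ReduceSpaceNoise pipeline described above: build a "no-signal" mask from a downscaled Sobel edge map, blur only the noise areas with the separable (7,5,3,1,0,1,3,5,7) kernel, and mix by the mask. The use of scipy.ndimage in place of the Eval primitives, and the interpolation orders, are my assumptions:
import numpy as np
from scipy.ndimage import zoom, sobel, uniform_filter, convolve1d

def reduce_space_noise_sketch(img, value_range=2048.0):
    f = np.asarray(img, dtype=np.float64)
    h, w = f.shape
    small = zoom(f, 1 / 16, order=1) / value_range                # downscale and normalize to cca 0..1
    edges = np.hypot(sobel(small, axis=0), sobel(small, axis=1))  # edge detect
    m = np.clip(edges * 5, 0, 1)
    for _ in range(3):
        m = uniform_filter(m, size=15)                            # simple blur
    m = np.clip(m * 2, 0, 1)
    m = zoom(m, (h / m.shape[0], w / m.shape[1]), order=1)        # back to original size
    mask = 1.0 - np.clip(m, 0, 1)                                 # 1 on blank space, 0 on "signal"
    # heavy separable blur of the noise-only areas:
    k = np.array([7, 5, 3, 1, 0, 1, 3, 5, 7], dtype=np.float64)
    safe = np.clip(mask, 0.01, 1)
    blanked = f * safe                                            # suppress signal areas so they do not leak
    blurred = convolve1d(convolve1d(blanked, k, axis=0), k, axis=1) / (k.sum() ** 2)
    blurred = blurred / safe
    return f * (1 - mask) + blurred * mask                        # blurred result only where masked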
- Another example for comparing the sharpening method:
Original, Sharpened...
- Since sharpening also amplifies noise, special care must be taken to denoise the images...
Channel 304 is especially noisy, and since it is gradually darkening and needs amplification,
its noise-to-signal ratio increases over time...
The following method, Denoise304, I apply to channel 304 everywhere, and to other channels only
outside of the Solar radius:
// MaxNoiseRange is 45 for channel 304, 100 for channels 171 and 211...
MaxNoiseRange:=Abs(Coalesce(MaxNoiseRange,45)); // value for 304 noise...
ZapLevel:=Coalesce(ZapLevel,0.75);
if (MaxNoiseRange==0) return F;
//
var w:=F.Width,h:=F.Height;
var Fd:=NFm(w,h);
F.LocalDiff(Fd,3);
//
Fd.MaskRange(-MaxNoiseRange,MaxNoiseRange).FillEmpty(0); // (-45,45)
//
Fd.Scale(ZapLevel); // 0.75
F.Subtract(Fd);
The LocalDiff function puts into each pixel its difference from the average of its 3x3 neighbourhood,
and it is a sharper edge-detect than the Sobel operator, which puts that difference
into the neighbouring pixels instead...
This way the method subtracts 0.75 of the difference of a pixel from its neighbourhood,
slightly blurring the image and removing most of the pixel-sized noise...
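The described LocalDiff behaviour, and the way Denoise304 uses it, can be sketched in NumPy like this; uniform_filter stands in for the 3x3 neighbourhood average (whether the library's average includes the centre pixel is my assumption), and the 45-count range and the 0.75 factor come from the code above:
import numpy as np
from scipy.ndimage import uniform_filter

def denoise304_sketch(f, max_noise_range=45.0, zap_level=0.75):
    f = np.asarray(f, dtype=np.float64)
    d = f - uniform_filter(f, size=3)                   # LocalDiff: pixel minus its 3x3 neighbourhood average
    d = np.where(np.abs(d) <= max_noise_range, d, 0.0)  # keep only differences within the noise range
    return f - zap_level * d                            # subtract 0.75 of the local difference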
- Methods Render_PmHeq and PmMix3 for the newer colored videos
will be published in FlowLib later, probably soon...
Color (hue & saturation) comes from a combination
of the 171, 211 and 304 channels, and it is combined with a lightness level
taken only from the sharpened 171 channel, using histogram equalization and sigmoid clipping...
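Purely as an illustration of that description (not the actual Render_PmHeq / PmMix3 code, which is not published yet): take hue and saturation from a three-channel false-colour mix and the lightness from the histogram-equalized, sigmoid-clipped 171 channel. The channel-to-RGB assignment, the use of HSV instead of HSL, the normalization and all parameters below are my own assumptions:
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def equalize(x):
    # simple histogram equalization by ranking the pixel values
    ranks = np.argsort(np.argsort(x.ravel())).astype(np.float64)
    return (ranks / (x.size - 1)).reshape(x.shape)

def pm_mix_sketch(c171, c211, c304, k=6.0):
    norm = lambda c: np.clip(np.asarray(c, np.float64) / np.percentile(c, 99.5), 0, 1)
    rgb = np.stack([norm(c211), norm(c171), norm(c304)], axis=-1)   # arbitrary channel order
    hsv = rgb_to_hsv(rgb)                                           # hue & saturation from the mix
    light = 1.0 / (1.0 + np.exp(-k * (equalize(np.asarray(c171, np.float64)) - 0.5)))  # sigmoid clipping
    hsv[..., 2] = light                                             # lightness only from the (sharpened) 171 channel
    return hsv_to_rgb(hsv)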
- ... TODO-other?
Video encoding
For video encoding I use ffmpeg
and its H.264 codec (libx264) via image2pipe ...
For example - when Jpegs are pre-generated for video:
eval.exe -lib MpegLib -v In=..\*.jpg "var V:=VideoEncoder({Width:1024,Height:1024,crf:24,fps:12});EnumFiles(In,#f(N,O){V.CopyFrom(OpenFile(O.FullName))})"
A more complicated case - when images need a time-code added, a batch file similar to this may be used:
set IN=..\*.jpg
set OUT=OutputFile.mp4
set START=var V:=VideoEncoder({Width:1024,Height:1024,crf:25,fps:12},'%OUT%');
set LOADCODE=var B:=LoadImage(O.FullName),D:=DetectFileDate(N);
set TEXTCODE=B.SetFont('DejaVu Serif',10,'white');B.TextOut(1,B.Height-B.LineHeight-1,FDt(D,'yyyy-mm-dd hh:nn'));
set OUTCODE=B.SaveToStream(V,'.jpg');
eval.exe -lib MpegLib -lib FlowLib "%START% EnumFiles('%IN%',#f(N,O){%LOADCODE% %TEXTCODE% %OUTCODE%})"
FFmpeg gets this command-line:
C:\Util\Video\FFmpeg\bin\ffmpeg.exe -framerate 12 -r 12 -f image2pipe -i - -movflags +faststart -framerate 12 -r 12 -s 1280x1024 -pix_fmt yuv420p -c:v libx264 -b:v 8388608 -crf 25 -r 12 -movflags +faststart "OutputFile.mp4"
(I am not sure now why, but it seemed to need the same params multiple times...)
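For comparison, the same image2pipe idea in a small Python sketch: pre-made JPEG frames are written to ffmpeg's stdin and encoded with libx264. The paths and parameters here are placeholders; the actual pipeline uses eval.exe with MpegLib as shown above:
import glob
import subprocess

cmd = ["ffmpeg", "-framerate", "12", "-f", "image2pipe", "-i", "-",
       "-c:v", "libx264", "-crf", "25", "-pix_fmt", "yuv420p",
       "-movflags", "+faststart", "-r", "12", "OutputFile.mp4"]
proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)
for name in sorted(glob.glob("*.jpg")):
    with open(name, "rb") as fh:
        proc.stdin.write(fh.read())    # each JPEG file is one frame on the pipe
proc.stdin.close()
proc.wait()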