Jahnavi Shah

M.Sc. Candidate, Geophysics & Planetary Science

Image processing

It’s been a while and I have a few research updates, so let’s dive right into it! 

Data collection

When I first started the project, I was gathering Sentinel-1 and ALOS data for impact craters in North America using Vertex (ASF’s data portal). When I got to the processing stage, I realized that many of the frames I had selected didn’t actually cover the area I needed. This was partly because Vertex doesn’t display a map scale, so I couldn’t tell how much area my search box covered. I’ve since learned to keep Google Maps open alongside it and estimate the scale well enough to make sure the radar data gives good coverage of the crater (and maybe even some peculiar surrounding features). Now I will have to go back and look for new data for some of the craters. Many of the North American craters are larger in diameter, which means several frames need to be mosaicked, and that was a little overwhelming. So I decided to focus on impact craters in South America: it’s a smaller dataset, the craters are relatively smaller, and most of them are exposed. I’ve collected radar data for all of those craters and am now working through the processing.

I want to quickly mention a tool I came across two weeks ago while gathering data. One morning, the Vertex site was down just when I really wanted to find relevant frames and download the data. I could search the portal, but the system would not let me log in or download anything. So I started adding files to the queue and decided I would bulk download them later (even though I had not yet looked into how to do that). It turned out to be super easy: the site generates a pre-written Python script that you run in the command terminal, and it downloads all the files in the queue. This has been really useful because I can spend the day finding all the files and leave the downloading to run overnight. I just have to be careful about file sizes and the space on my computer, because the downloading stops midway when the disk fills up. Then I have to go through all the files, manually figure out which ones did not finish downloading, and restart the process. Other than that, I find the bulk download option very helpful.
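That manual check for failed downloads could be scripted. Here is a small sketch (not part of the Vertex tooling): it compares the filenames from the download queue against what actually landed on disk. The file list, directory, and the 1 KB “partial download” threshold are all my own illustrative assumptions; real granules are far larger.

```python
from pathlib import Path

def find_missing_or_partial(queued_names, download_dir, min_size_bytes=1024):
    """List queued granules that never arrived or look truncated.

    queued_names: filenames expected from the Vertex download queue
    (a hypothetical list, e.g. pasted in or read from a text file).
    Anything smaller than min_size_bytes is treated as a partial download.
    """
    download_dir = Path(download_dir)
    incomplete = []
    for name in queued_names:
        f = download_dir / name
        if not f.exists() or f.stat().st_size < min_size_bytes:
            incomplete.append(name)
    return incomplete
```

Re-running the bulk-download script on just the returned names would restart only the failed files instead of the whole queue.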

Issues with space and memory

I started off doing the image processing on my own computer, which didn’t work because it doesn’t have enough memory. So I moved to Mike’s computer, which can process everything, but each step takes 15-30 minutes for the Sentinel-1 data (about 5 minutes for ALOS). That was definitely an improvement over having no processing power on my workstation, but it’s still a significant amount of processing time. Thanks to Hun, we were able to test the image processing in Oz’s lab and found that each Sentinel-1 step takes about 2-4 minutes there. Based on that, Catherine kindly got me access to the lab to process my S1 data. One challenge with the lab computers is that they keep shutting down or restarting randomly; last week, when I tried to process some data, it did not go well. Another challenge is that they don’t have much storage, so I might have to transfer each file to an external drive as soon as it’s processed. I think this space/memory problem is a big one, because eventually I’m going to run out of space on the drive and the server just from the unprocessed data alone. For now, the 1 terabyte will do, but I am also brainstorming solutions for the near future. I wonder if a 5-10 TB drive might do the trick. I am open to ideas/comments/concerns.

Image processing

Image processing is the part I really want to focus on. I sat in on a Digital Image Processing lecture last week (it’s a course offered by the ECE department). I am interested in the subject and considering taking the course next year, so I thought it would be good to give it a test run. It turned out that the lecture I attended focused on techniques that might be useful for my project, and that are good to know in general. A few different filters were discussed in class:

1) Median filters: reduce salt-and-pepper noise with less blurring than spatial averaging. This filter is interesting because I wonder if it is what the Speckle Filtering function in the Sentinel Application Platform (SNAP) uses. I’ll need to dig through the software’s documentation, and perhaps the source code, to figure it out.

2) Sharpening filters: highlight fine detail (e.g. edges). The instructor mentioned that this type of filter is very useful for radar images; he talked about his experience working with military radar data and using these filters to help identify objects such as missiles.

3) Gradient filters: good for edge detection, but they also magnify noise.

3.1) Laplacian filters: highlight discontinuities (more strongly than first-order derivatives).
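To get a feel for these three filter types before trying them on real radar scenes, here is a small sketch using `scipy.ndimage` on a plain NumPy array. The 3×3 window and the Laplacian-subtraction form of sharpening are illustrative choices on my part, not what SNAP does internally.

```python
import numpy as np
from scipy import ndimage

def lecture_filters(img):
    """Apply the three filter types from the lecture to a 2-D array.

    Returns (median, sharpened, laplacian). Kernel sizes and the
    sharpening formula are illustrative choices.
    """
    img = np.asarray(img, dtype=float)
    # 1) Median filter: replaces each pixel with the median of its 3x3
    #    neighbourhood, removing isolated salt-and-pepper spikes.
    med = ndimage.median_filter(img, size=3)
    # 3)/3.1) Laplacian: second-derivative operator that highlights
    #    discontinuities (and, as warned in class, amplifies noise).
    lap = ndimage.laplace(img)
    # 2) Sharpening: one common recipe subtracts the Laplacian from the
    #    original, boosting edges and fine detail.
    sharp = img - lap
    return med, sharp, lap
```

On a speckled SAR amplitude image, the median filter should behave much like a simple despeckler, while the Laplacian output gives a quick look at where the strong edges (crater rims, for instance) are.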

Next steps: I would like to apply these filters to some of the radar images and see what results we get. I’m not sure how effective it will be for radar images, but I’ll test it out. 

Image processing vs. signal processing

Sharpening filters are commonly used in signal processing and are usually very effective (see the gravitational-wave example below). However, sharpening filters in image processing require a bit more work on the user’s side. For instance, if I apply a sharpening filter to a radar image, a lot of fine detail might get highlighted, and in that case visual analysis doesn’t necessarily become easier. But would these filters be more effective if the images were analyzed digitally, so that subtle details/changes could be recognized easily? Is it the numerical analysis of signals that makes these filters more effective? I’ll definitely have to read into this more, but I wanted to ponder it here a little.

Here is an example of signal filtering that I did in a computational physics course. We analyzed LIGO data from the first gravitational wave detection event, ‘GW150914’. We applied a few filters to suppress the excess noise and highlight the event signal. Lastly, we converted the data to a sound file so that we could try to hear it (a frequency shift was applied to make the chirp easier to hear, the audio equivalent of applying false colour to telescope images, for example). Links to the Hanford and Livingston signal sounds.
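The spirit of that exercise can be sketched with a Butterworth band-pass filter in SciPy. This is a toy version: the sampling rate, the 30-300 Hz band, and the synthetic chirp are stand-ins of my own choosing, not the actual parameters or data from the LIGO analysis.

```python
import numpy as np
from scipy import signal

def bandpass(data, fs, low, high, order=4):
    """Zero-phase Butterworth band-pass filter.

    fs is the sampling rate in Hz; low/high bound the pass band.
    Second-order sections (sos) keep the filter numerically stable, and
    sosfiltfilt runs it forward and backward so no phase shift is added.
    """
    sos = signal.butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return signal.sosfiltfilt(sos, data)

# Toy demo: a chirp-like sweep buried in noise (a stand-in for strain data).
fs = 4096
t = np.arange(0, 1.0, 1.0 / fs)
raw = signal.chirp(t, f0=35, t1=1.0, f1=250) + 2.0 * np.random.randn(t.size)
clean = bandpass(raw, fs, 30, 300)  # suppress rumble below 30 Hz, hiss above 300 Hz
```

The band-pass step is what makes the chirp stand out; in the course we also whitened the spectrum, which this sketch leaves out.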


© 2020 Jahnavi Shah
