Hacker News
Automated Stitching of Chip Images (bunniestudios.com)
73 points by lemper 8 days ago | 14 comments





A few years ago, my dad, who worked in chip quality control, asked me how to do exactly this, but with images from optical microscopes.

I can confirm the post's claim that panorama stitching software is not able to do the job. But what I found was that the OpenCV Stitcher class can do this perfectly out of the box. Unfortunately, there was no existing GUI for the class at the time, so I quickly made one in 3 days: https://github.com/kwon-young/ImageStitcher

It would have been nice if the post had compared its approach to the Stitcher class. Maybe the number of images, the size of the final image, or the stitching error cannot be sufficiently controlled with the Stitcher class?


It's really not clear from that readme what it's about; maybe more people would be interested if it were more descriptive and had a screenshot?

Yeah, I should really try to improve it... But it was just 3 days of hacking to put a frontend on the OpenCV Stitcher class and produce an exe that my dad could use.

What does it do that hugin doesn’t?

I don't recall finding hugin when I did my (short) research on image stitching tools. Thanks to you, I've read the scanned-image stitching documentation, and I suppose it could work.

However, the process seems quite complicated and slow, asking you to draw control points and all that.

Microscope pictures have the particularity that there is nearly no deformation in the images, but you have a lot of them, so you want the process to be as automatic as possible. That was the goal of my tool: make the simplest GUI and process possible for the task at hand.


Hugin does have automatic feature detection. https://discuss.pixls.us/t/long-graffitti-from-raw-photos-wi...

Perhaps it doesn't work with the regular patterns of a chip, like the article mentioned.


Very cool, I've been waiting for this part of the writeup. A few years ago I tried to write a similar microscope pano stitcher using SIFT features in OpenCV; I'm not sure I had come across template matching. I also struggled with blending and never got anything to work. From the blog's comments I want to check out http://abria.github.io/TeraStitcher/ next time I'm looking to do this. Seeing the final stitch results makes me wonder how accurate the census technique can be, especially against an adversary. Could they just layer a dummy chip on top?

The technique looks from the "bottom" side of the chip -- so the imaged elements are mostly "metal 1", i.e. the layer of metal that is directly connected to the transistors. Inserting a dummy layer between the transistors and metal 1 would have a huge performance and density impact; I don't think it'd be practical.

That being said, the back side power delivery stuff that is currently in the pipe for the sub-"2nm" nodes would block viewing the transistors.


Is there a reason not to use X-ray for everything now? Dental and veterinary X-ray systems are under $1000 on Alibaba.

If you're stitching images together, that solves one of the primary problems with using these x-ray systems for semiconductor analysis.


I'm not sure of the physics reason why, but X-ray images don't have the resolution. You would think they would, because X-rays have a much shorter wavelength, but I think it has to do with the fact that you're dealing with a point source and relying on materials to absorb the X-rays. So, with an X-ray, what you're seeing are projected shadows some distance away from the thing you're imaging. Those shadows are also convolved with all the layers between the layer of interest and the sensor, so you get all the metal layers interacting with the light, making it even harder to see a single metal layer.

Thus, to get high resolution on individual slices, you have to do something like ptychography or CT scanning, where you move the light source around to get a better idea of what's doing the absorption. These types of scanners are substantially more expensive than a dental X-ray.


Slight variations of contrast seem like they could be easily countered by stacking all pictures and averaging them. The strongest signal will be the one present in every image, i.e. the brightness variation. Afterwards, smooth it out as needed and subtract it from every image before stitching.

I'm surprised it wasn't mentioned, was it tried and found insufficient?
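The stack-and-average idea above can be sketched in pure numpy. The linear shading model, tile sizes, and tile count below are made up for illustration; the comment's "subtract" step is implemented as subtracting the spatial variation of the averaged stack:

```python
# Stack many tiles and average: per-tile content cancels, while the shared
# illumination pattern survives and can be subtracted back out.
import numpy as np

rng = np.random.default_rng(2)
shading = 0.3 * (1.0 - np.linspace(0, 1, 256))[None, :]  # hypothetical left-bright falloff
tiles = [rng.random((256, 256)) + shading for _ in range(50)]

# Averaging the stack estimates the shared illumination pattern.
flat = np.mean(tiles, axis=0)

# Subtract the spatial variation, keeping the overall brightness level.
corrected = [t - (flat - flat.mean()) for t in tiles]
```

With real data one would smooth `flat` first, as the comment suggests, so residual image content in the average doesn't get imprinted on every tile.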


Actually there is some image stacking and averaging happening because there's a lot of overlap between the images, and some trials were done with 4x oversampling and averaging.

Probably the next thing to do is to put a diffuser on the LEDs to improve the uniformity of lighting. I think some of the hot-spotting has to do with the radiation pattern of the LED itself, if you just look at it on a blank sheet you can see a bit of a halo on the pattern.


wait, wasn't automated image stitching solved like decades ago? hugin, autopano, microsoft ICE...

The post has a section addressing this:

> At first one might think, “this is easy, just throw it into any number of image stitching programs used to generate panoramas!”. I thought that too.

> However, it turns out these programs perform poorly on images of chips. The most significant challenge is that chip features tend to be large, repetitive arrays.



