Matching - Teach-in and subsequent object search




This section describes the basic approach to configuring vision function Matching for standard applications.

Requirements

Define NumSearchMax.

Set MinScore to about half (approximately 0.5).

Define MaxOverlap according to the situation (see Notes for the image border).

Define ShapeSearchBorderShapeModels according to the situation (see Notes for the image border).

Select the MatchingType (NCC or shape-based).
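The sketch below only collects these starting values in one place so they are easy to review. The Python container is illustrative and not a mapp Vision type; the default values shown are assumptions, not prescribed settings.

```python
from dataclasses import dataclass

@dataclass
class MatchingSetup:
    """Illustrative collection of the initial parameters named above.
    Not a mapp Vision type; the defaults are example starting points only."""
    NumSearchMax: int = 1                        # maximum number of matches to search for
    MinScore: float = 0.5                        # start at about half, refine later
    MaxOverlap: float = 0.5                      # allowed overlap between found matches
    ShapeSearchBorderShapeModels: bool = False   # see "Notes for the image border"
    MatchingType: str = "shape-based"            # "NCC" or "shape-based"
```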

Teach-in

Define the ROI for the teach-in (area model in the image).

Information:

It is strongly recommended to use the Eraser tool (subtractive) to reduce the region to the necessary area or to remove unnecessary edges!

Define possible model rotations (with respect to the taught-in reference model) via ModelAngleStart and ModelAngleExtent.

Define the matching search area for rotations using SearchAngleStart and SearchAngleExtent.

 

Information:

The intersection of the angular ranges of the model and the search is relevant for finding matches. The ranges are usually chosen identically for the model and the search!
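As a minimal sketch of this note, the helper below (a hypothetical function, not part of mapp Vision) computes the overlap of the model and search angle ranges. Angles are in degrees, and wrap-around at 360° is ignored for simplicity.

```python
def effective_angle_range(model_start, model_extent, search_start, search_extent):
    """Return (start, extent) of the overlap of the model and search angle
    ranges, or None if they do not intersect. Wrap-around at 360 deg is
    ignored to keep the sketch simple."""
    start = max(model_start, search_start)
    end = min(model_start + model_extent, search_start + search_extent)
    if end <= start:
        return None
    return start, end - start

# Identical ranges (the usual choice) keep the full extent usable:
print(effective_angle_range(0, 90, 0, 90))    # (0, 90)
# Shifted ranges reduce the range in which matches can be found:
print(effective_angle_range(0, 90, 45, 90))   # (45, 45)
```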

 

Information:

If the object has rotational symmetry, the angular range must be limited accordingly, e.g. for a quadrilateral with AngleStart = 0 and AngleExtent = 90° or for a triangle with AngleStart = 0 and AngleExtent = 120°.

Define the desired scaling for the model (shape-based Matching): ShapeModelScaleMin, ShapeModelScaleMax.

Appropriately define the scales for the search (must be a subset of the model's scale range; see the sketch below): ShapeSearchScaleMin, ShapeSearchScaleMax.

Perform a final visual check whether all relevant edges are included in the model or if there are too many edges. In these cases, adjust ShapeModelContrastMin and ShapeModelContrastMax accordingly.
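The constraint from the scaling step above (the search scale range must be a subset of the model scale range) can be checked mechanically. The helper below is a hypothetical sketch using the parameter names from this section; it is not a mapp Vision call.

```python
def check_search_scales(shape_model_scale_min, shape_model_scale_max,
                        shape_search_scale_min, shape_search_scale_max):
    """Raise an error if [ShapeSearchScaleMin, ShapeSearchScaleMax] is not a
    subset of [ShapeModelScaleMin, ShapeModelScaleMax]."""
    if not (shape_model_scale_min <= shape_search_scale_min
            <= shape_search_scale_max <= shape_model_scale_max):
        raise ValueError("Search scale range must be a subset of the model scale range")

check_search_scales(0.8, 1.2, 0.9, 1.1)   # OK
# check_search_scales(0.8, 1.2, 0.7, 1.1) # would raise: search range exceeds the model range
```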

Search optimization

Speed up the search by defining a suitable ROI for Execute.

For NCC and shape-based Matching, the domain of the respective image (corresponding to the ROI for Execute) determines the search area in which matches of the previously defined model can be searched for and found. More precisely, it is the search area for the reference point of the model, i.e. for the geometric center of the domain (region) of the image used for the teach-in process (defined in the VF by the "Teaching ROI"). The ROI can therefore usually be chosen much smaller than initially thought, which speeds up the search.

If the geometric center of the model of a potential hit lies outside the (Execute) ROI, the model is not found. This can be relevant, for example, for (Execute) ROIs with gaps if the model consists of more than one part: the geometric center of the model can then lie between the model parts and may fall into one of the gaps of the (Execute) ROI. In that case the model is not found, even if all parts of the model are within the (Execute) ROI.

Here is an example:

A square is used to create a model. The geometric center is the center of the square. If the "Execute ROI" is selected so that the centers of the potential hits (squares) in the search image are excluded from it (and the ROI thus only includes areas around the centers), no hit is found in the result.

For search purposes, the ROI can be narrowed down quite a bit in this example. It only has to cover a small area around the centers of the squares (including the centers themselves, of course), not the entire area of the squares. This can speed up the search. Defining the search area with respect to the reference point of the model also means that model parts are permitted to lie outside the (Execute) ROI for a successful search (as long as they are inside the image, see below). As described, what matters for the search is the geometric center of the model. The score does not change if parts of the model are outside the ROI.
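The behavior described above can be pictured with a small sketch: only the model's reference point (the geometric center of the taught region) has to lie inside the Execute ROI for a hit to be possible. The function and the pixel-set representation of the ROI are illustrative assumptions, not the mapp Vision implementation.

```python
def hit_possible(reference_point, execute_roi):
    """A match can only be reported if the model's reference point lies inside
    the Execute ROI; model parts outside the ROI do not matter (as long as the
    whole model stays inside the image). The ROI is modeled here as a set of
    (x, y) pixels, so ROIs with gaps are covered as well."""
    return reference_point in execute_roi

# Square model with its geometric center at (50, 50):
# a small ROI around the expected center is sufficient ...
roi_around_center = {(x, y) for x in range(45, 56) for y in range(45, 56)}
print(hit_possible((50, 50), roi_around_center))   # True

# ... while an ROI that covers the square but excludes its center finds nothing.
roi_with_gap = {(x, y) for x in range(0, 101) for y in range(0, 101)} - roi_around_center
print(hit_possible((50, 50), roi_with_gap))        # False
```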

Reducing the memory requirements of models

It is not necessary to recognize models in their complete rotation. The parameters ModelAngleStart and ModelAngleExtent can be optimized for this. The smaller the taught-in rotation range, the smaller the size of the model.

Searching for objects with symmetries

Another reason to limit the search range is searching for objects with rotational symmetry. Limiting the angle range of the model reduces the number of potential candidate results.

Rule of thumb for symmetrical polygons: 360° / number of corners = angle range (e.g. 0° to 120° for an equilateral triangle, 0° to 90° for a square, 0° to 60° for a hexagon, etc.).
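The rule of thumb translates directly into a small helper (illustrative only):

```python
def angle_extent_for_symmetry(number_of_corners):
    """Rule of thumb: 360 deg / number of corners = required angle range."""
    return 360.0 / number_of_corners

print(angle_extent_for_symmetry(3))   # 120.0 -> equilateral triangle: 0 deg to 120 deg
print(angle_extent_for_symmetry(4))   # 90.0  -> square:               0 deg to 90 deg
print(angle_extent_for_symmetry(6))   # 60.0  -> hexagon:              0 deg to 60 deg
```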

Information:

For NCC models, the drawn ROI should be very narrow in comparison to the model; otherwise, the search will also require a lot of memory.

When executing a full search (ModelAngle from 0 to 360°) on a 1.3 MP sensor with an ROI area greater than 1/4 of the image size, for example, the search can be aborted after a few minutes due to insufficient memory.
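Based on this example, a rough plausibility check could look like the sketch below. The quarter-of-the-image threshold and the full-rotation condition are taken from the example above and are not hard limits of mapp Vision.

```python
def warn_if_ncc_roi_large(roi_area_px, image_area_px, model_angle_extent_deg):
    """Rough plausibility check derived from the example above: a full
    rotation (0..360 deg) combined with an ROI larger than about a quarter
    of the image indicates very high memory usage for NCC models."""
    if model_angle_extent_deg >= 360 and roi_area_px > image_area_px / 4:
        print("Warning: large NCC teaching ROI with full rotation - "
              "expect very high memory usage or an aborted search.")

# 1.3 MP sensor, ROI covering ~30% of the image, full rotation:
warn_if_ncc_roi_large(roi_area_px=400_000, image_area_px=1_300_000,
                      model_angle_extent_deg=360)
```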

Accelerating the search

Adjust the parameters iteratively via repeated searches (TestExecute = 1) in test mode.

Keep increasing MinScore until the desired Matching result is found.

Next, increase ShapeSearchGreediness until Matching is no longer successful.

Then lower MinScore. If this does not help, go back to previous values.
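The three steps above can be read as a simple tuning loop. In the sketch below, run_matching(min_score, greediness) is a hypothetical stand-in for one TestExecute run that returns True if the desired match is found; it is not a mapp Vision call, and the step size is an arbitrary assumption.

```python
def tune_parameters(run_matching, min_score=0.5, greediness=0.5, step=0.05):
    """Illustrative rendering of the iterative procedure described above."""
    # 1) Raise MinScore as far as the desired match is still found.
    while min_score + step <= 1.0 and run_matching(min_score + step, greediness):
        min_score += step
    # 2) Raise ShapeSearchGreediness until Matching would no longer succeed.
    while greediness + step <= 1.0 and run_matching(min_score, greediness + step):
        greediness += step
    # 3) If a further greediness increase fails, try it once more with a lower
    #    MinScore; otherwise keep the previous (working) values.
    if (greediness + step <= 1.0 and min_score - step > 0.0
            and run_matching(min_score - step, greediness + step)):
        min_score -= step
        greediness += step
    return min_score, greediness
```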

Information:

An active TestExecute (1) can result in higher processing times that do not occur on the machine when TestExecute is not active.

Notes for the image border

With both Matching methods, a result is only returned if the model is entirely within the image. Likewise, a model will not be found if it would extend beyond the image border (here the actual image border is meant, independent of a set ROI).

With shape-based Matching, it must be taken into account that potential matches that are too close to the edge of the image may also be sorted out. The reason for this is the search on pyramid levels: a hit can extend beyond the edge of the scaled image of the corresponding pyramid level and is then discarded.

The behavior can be changed with shape-based Matching using parameter ShapeSearchBorderShapeModels. In this case, points of the model outside the image are considered "hidden" and cause a corresponding reduction of the match score.

Topics in this section:

ReferencePositionX/Y function and moving the reference point

Matching - Influences on the grade value (Score) of Matching

Matching - Using the deformable shape model type