
mmhuman3d.apis

mmhuman3d.apis.collect_results_cpu(result_part, size, tmpdir=None)[source]

Collect results in cpu.

mmhuman3d.apis.collect_results_gpu(result_part, size)[source]

Collect results in gpu.

mmhuman3d.apis.feature_extract(model, img_or_path, det_results, bbox_thr=None, format='xywh')[source]

Extract image features with a list of person bounding boxes.

Parameters
  • model (nn.Module) – The loaded feature extraction model.

  • img_or_path (Union[str, np.ndarray]) – Image filename or loaded image.

  • det_results (List[dict]) – each item of the list is a dict that may contain ‘bbox’ and/or ‘track_id’. ‘bbox’ (4, ) or (5, ): The person bounding box, which contains 4 box coordinates (and score). ‘track_id’ (int): The unique id for each human instance.

  • bbox_thr (float, optional) – Threshold for bounding boxes. If bbox_thr is None, ignore it. Defaults to None.

  • format (str, optional) – bbox format. Default: ‘xywh’. ‘xyxy’ means (left, top, right, bottom), ‘xywh’ means (left, top, width, height).

Returns

The bbox and feature info, containing the bbox: (left, top, right, bottom, [score]) and the features.

Return type

list[dict]
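
A minimal usage sketch; the config path, checkpoint path, image path and result keys below are placeholder assumptions rather than part of the documented API (init_model is documented further below):

from mmhuman3d.apis import feature_extract, init_model

# init_model is documented to return the constructed model and an optional
# extractor model; only the main model is needed here.
model, _ = init_model(
    'configs/vibe/vibe_feature_extractor.py',  # hypothetical config path
    'checkpoints/feature_extractor.pth',       # hypothetical checkpoint
    device='cuda:0')

# One detected person: bbox in 'xywh' format plus a detection score.
det_results = [{'bbox': [50, 80, 200, 400, 0.98], 'track_id': 0}]

results = feature_extract(
    model, 'demo/person.jpg', det_results, bbox_thr=0.5, format='xywh')
# Each item holds the (converted) bbox and the extracted features.
print(results[0]['bbox'], results[0]['features'].shape)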

mmhuman3d.apis.inference_image_based_model(model, img_or_path, det_results, bbox_thr=None, format='xywh')[source]

Inference a single image with a list of person bounding boxes.

Parameters
  • model (nn.Module) – The loaded pose model.

  • img_or_path (Union[str, np.ndarray]) – Image filename or loaded image.

  • det_results (List[dict]) – each item of the list is a dict that may contain ‘bbox’ and/or ‘track_id’. ‘bbox’ (4, ) or (5, ): The person bounding box, which contains 4 box coordinates (and score). ‘track_id’ (int): The unique id for each human instance.

  • bbox_thr (float, optional) – Threshold for bounding boxes. Only bboxes with higher scores will be fed into the pose detector. If bbox_thr is None, ignore it. Defaults to None.

  • format (str, optional) – bbox format (‘xyxy’ | ‘xywh’). Default: ‘xywh’. ‘xyxy’ means (left, top, right, bottom), ‘xywh’ means (left, top, width, height).

Returns

Each item in the list is a dictionary, containing the bbox: (left, top, right, bottom, [score]), SMPL parameters, vertices, kp3d, and camera.

Return type

list[dict]
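
A minimal single-image inference sketch; the config, checkpoint and image paths are placeholders, and the exact result keys depend on the model:

from mmhuman3d.apis import inference_image_based_model, init_model

model, _ = init_model(
    'configs/hmr/resnet50_hmr_pw3d.py',  # hypothetical image-based config
    'checkpoints/hmr.pth',               # hypothetical checkpoint
    device='cuda:0')

# Person detections from any detector, in 'xywh' format with scores.
det_results = [{'bbox': [50, 80, 200, 400, 0.99]}]

mesh_results = inference_image_based_model(
    model, 'demo/person.jpg', det_results, bbox_thr=0.9, format='xywh')
# Each item contains the bbox, SMPL parameters, vertices, kp3d and camera.
print(mesh_results[0].keys())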

mmhuman3d.apis.inference_video_based_model(model, extracted_results, with_track_id=True, causal=True)[source]

Inference SMPL parameters from extracted features using a video-based model.

Parameters
  • model (nn.Module) – The loaded mesh estimation model.

  • extracted_results (List[List[Dict]]) –

    Multi-frame feature extraction results stored in a nested list. Each element of the outer list is the feature extraction results of a single frame, and each element of the inner list is the feature information of one person, which contains:

    features (ndarray): extracted features

    track_id (int): unique id of each person, required when with_track_id==True

    bbox ((4, ) or (5, )): left, right, top, bottom, [score]

  • with_track_id – If True, the element in extracted_results is expected to contain “track_id”, which will be used to gather the feature sequence of a person from multiple frames. Otherwise, the extracted results in each frame are expected to have a consistent number and order of identities. Default is True.

  • causal (bool) – If True, the target frame is the first frame in a sequence. Otherwise, the target frame is in the middle of a sequence.

Returns

Each item in the list is a dictionary, which contains SMPL parameters, vertices, kp3d, and camera.

Return type

list[dict]
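
A minimal video-based inference sketch, assuming per-frame features were already produced by feature_extract; the 2048-d feature size and the config/checkpoint paths are assumptions:

import numpy as np
from mmhuman3d.apis import inference_video_based_model, init_model

model, _ = init_model(
    'configs/vibe/vibe_pw3d.py',  # hypothetical video-based config
    'checkpoints/vibe.pth',       # hypothetical checkpoint
    device='cuda:0')

# Two frames, one tracked person per frame.
person = {
    'features': np.zeros(2048, dtype=np.float32),  # assumed feature size
    'track_id': 0,
    'bbox': np.array([50., 80., 250., 480., 0.98]),
}
extracted_results = [[person], [person]]

mesh_results = inference_video_based_model(
    model, extracted_results, with_track_id=True, causal=True)
# One dict per target frame with SMPL parameters, vertices, kp3d and camera.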

mmhuman3d.apis.init_model(config, checkpoint=None, device='cuda:0')[source]

Initialize a model from config file.

Parameters
  • config (str or mmcv.Config) – Config file path or the config object.

  • checkpoint (str, optional) – Checkpoint path. If left as None, the model will not load any weights.

Returns

The constructed model (nn.Module), and the constructed extractor model (nn.Module or None).

Return type

nn.Module

mmhuman3d.apis.multi_gpu_test(model, data_loader, tmpdir=None, gpu_collect=False)[source]

Test model with multiple gpus.

This method tests the model with multiple gpus and collects the results under two different modes: gpu and cpu. By setting gpu_collect=True, it encodes results to gpu tensors and uses gpu communication for results collection. In cpu mode it saves the results on different gpus to tmpdir and collects them by the rank 0 worker.

Parameters
  • model (nn.Module) – Model to be tested.

  • data_loader (DataLoader) – Pytorch data loader.

  • tmpdir (str) – Path of directory to save the temporary results from different gpus under cpu mode.

  • gpu_collect (bool) – Option to use either gpu or cpu to collect results.

Returns

The prediction results.

Return type

list

mmhuman3d.apis.set_random_seed(seed, deterministic=False)[source]

Set random seed.

Parameters
  • seed (int) – Seed to be used.

  • deterministic (bool) – Whether to set the deterministic option for CUDNN backend, i.e., set torch.backends.cudnn.deterministic to True and torch.backends.cudnn.benchmark to False. Default: False.
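
A minimal sketch; fixing the seed (and optionally enabling deterministic CUDNN) before building models and dataloaders makes runs reproducible:

from mmhuman3d.apis import set_random_seed

# deterministic=True sets cudnn.deterministic and disables cudnn.benchmark,
# trading some speed for reproducibility.
set_random_seed(0, deterministic=True)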

mmhuman3d.apis.single_gpu_test(model, data_loader, show=False, out_dir=None, **show_kwargs)[source]

Test with single gpu.

mmhuman3d.apis.train_model(model, dataset, cfg, distributed=False, validate=False, timestamp=None, device='cuda', meta=None)[source]

Main API for training a model.

mmhuman3d.core

cameras

class mmhuman3d.core.cameras.FoVOrthographicCameras(*args: Any, **kwargs: Any)[source]

Inherited from Pytorch3D FoVOrthographicCameras.

classmethod get_default_projection_matrix(**args) → torch.Tensor[source]

Class method. Calculate the projective transformation matrix by default parameters.

scale_x = 2 / (max_x - min_x)
scale_y = 2 / (max_y - min_y)
scale_z = 2 / (far-near)
mid_x = (max_x + min_x) / (max_x - min_x)
mid_y = (max_y + min_y) / (max_y - min_y)
mid_z = (far + near) / (far - near)

K = [[scale_x,        0,         0,  -mid_x],
     [0,        scale_y,         0,  -mid_y],
     [0,              0,  -scale_z,  -mid_z],
     [0,              0,         0,       1],]
Parameters

**kwargs – parameters for the projection can be passed in as keyword arguments to override the default values.

Returns

a torch.Tensor which represents a batch of projection matrices K of shape (N, 4, 4)
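
A minimal sketch of querying the default orthographic projection matrix; calling the classmethod without keyword overrides is an assumption, and any of the parameters above can be passed to override the defaults:

from mmhuman3d.core.cameras import FoVOrthographicCameras

K = FoVOrthographicCameras.get_default_projection_matrix()
print(K.shape)  # expected to be a batch of (N, 4, 4) matrices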

to_ndc(**kwargs)[source]

Not implemented.

to_ndc_(**kwargs)[source]

Not implemented.

to_screen(**kwargs)[source]

Not implemented.

to_screen_(**kwargs)[source]

Not implemented.

class mmhuman3d.core.cameras.FoVPerspectiveCameras(*args: Any, **kwargs: Any)[source]

Inherited from Pytorch3D FoVPerspectiveCameras.

classmethod get_default_projection_matrix(**args) → torch.Tensor[source]

Class method. Calculate the projective transformation matrix by default parameters.

Parameters

**kwargs – parameters for the projection can be passed in as keyword arguments to override the default values set in __init__.

Returns

a torch.Tensor which represents a batch of projection matrices K of shape (N, 4, 4)

to_ndc(**kwargs)[source]

Not implemented.

to_ndc_(**kwargs)[source]

Not implemented.

to_screen(**kwargs)[source]

Not implemented.

to_screen_(**kwargs)[source]

Not implemented.

class mmhuman3d.core.cameras.NewAttributeCameras(*args: Any, **kwargs: Any)[source]

Inherited from Pytorch3D CamerasBase and provides some new functions.

compute_depth_of_points(points: torch.Tensor) → torch.Tensor[source]

Compute depth of points to the camera plane.

Parameters

points ([torch.Tensor]) – shape should be (batch_size, …, 3).

Returns

shape will be (batch_size, 1)

Return type

torch.Tensor

compute_normal_of_meshes(meshes: pytorch3d.structures.Meshes) → torch.Tensor[source]

Compute normal of meshes in the camera view.

Parameters

meshes (pytorch3d.structures.Meshes) – the input meshes.

Returns

shape will be (batch_size, 1)

Return type

torch.Tensor

extend(N)[source]

Create new camera class which contains each input camera N times.

Parameters

N – number of new copies of each camera.

Returns

NewAttributeCameras object.

extend_(N)[source]

Extend camera in place.

get_camera_plane_normals(**kwargs) → torch.Tensor[source]

Get the identity normal vector which stretches out of the camera plane.

Could pass R to override the camera extrinsic rotation matrix.

Returns

shape will be (N, 3)

Return type

torch.Tensor

classmethod get_default_projection_matrix()[source]

Class method. Calculate the projective transformation matrix by default parameters.

Parameters

**kwargs – parameters for the projection can be passed in as keyword arguments to override the default values set in __init__.

Returns

a torch.Tensor which represents a batch of projection matrices K of shape (N, 4, 4)

get_image_size()[source]

Returns the image size, if provided, expected in the form of (height, width). The image size is used for conversion of projected points to screen coordinates.

to_ndc(**kwargs)[source]

Convert to ndc.

to_ndc_(**kwargs)[source]

Convert to ndc inplace.

to_screen(**kwargs)[source]

Convert to screen.

to_screen_(**kwargs)[source]

Convert to screen inplace.

class mmhuman3d.core.cameras.OrthographicCameras(*args: Any, **kwargs: Any)[source]

Inherited from Pytorch3D OrthographicCameras.

classmethod get_default_projection_matrix(**args) → torch.Tensor[source]

Class method. Calculate the projective transformation matrix by default parameters.

fx = focal_length[:,0]
fy = focal_length[:,1]
px = principal_point[:,0]
py = principal_point[:,1]

K = [[fx,   0,    0,  px],
     [0,   fy,    0,  py],
     [0,    0,    1,   0],
     [0,    0,    0,   1],]
Parameters

**kwargs – parameters for the projection can be passed in as keyword arguments to override the default values.

Returns

a torch.Tensor which represents a batch of projection matrices K of shape (N, 4, 4)

class mmhuman3d.core.cameras.PerspectiveCameras(*args: Any, **kwargs: Any)[source]

Inherited from Pytorch3D PerspectiveCameras.

classmethod get_default_projection_matrix(**args) → torch.Tensor[source]

Class method. Calculate the projective transformation matrix by default parameters.

Parameters

**kwargs – parameters for the projection can be passed in as keyword arguments to override the default values set in __init__.

Returns

a torch.Tensor which represents a batch of projection matrices K of shape (N, 4, 4)

class mmhuman3d.core.cameras.WeakPerspectiveCameras(*args: Any, **kwargs: Any)[source]

Inherited from [Pytorch3D cameras](https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/renderer/cameras.py) and mimicked the code style. Re-implemented functions: compute_projection_matrix, get_projection_transform, unproject_points, is_perspective, in_ndc for rendering.

K modified from [VIBE](https://github.com/mkocabas/VIBE/blob/master/lib/utils/renderer.py) and changed to opencv convention. For the original license, please see docs/additional_license.md.

This intrinsic matrix is actually orthographic, but it can serve as a weak-perspective projection for a single SMPL mesh.

compute_projection_matrix(scale_x, scale_y, transl_x, transl_y, aspect_ratio) → torch.Tensor[source]

Compute the calibration matrix K of shape (N, 4, 4)

Parameters
  • scale_x (Union[torch.Tensor, float], optional) – Scale in x direction.

  • scale_y (Union[torch.Tensor, float], optional) – Scale in y direction.

  • transl_x (Union[torch.Tensor, float], optional) – Translation in x direction.

  • transl_y (Union[torch.Tensor, float], optional) – Translation in y direction.

  • aspect_ratio (Union[torch.Tensor, float], optional) – aspect ratio of the image pixels. 1.0 indicates square pixels.

Returns

torch.FloatTensor of the calibration matrix with shape (N, 4, 4)

static convert_K_to_orig_cam(K: torch.Tensor, aspect_ratio: Union[torch.Tensor, float] = 1.0) → Tuple[torch.Tensor, torch.Tensor, torch.Tensor][source]

Compute the orig_cam (predicted camera) parameter of smpl from the opencv orthographic intrinsic matrix K.

Parameters
  • K (torch.Tensor) – opencv orthographic intrinsic matrix: (N, 4, 4)

        K = [[sx*r,   0,    0,   tx*sx*r],
             [0,     sy,    0,     ty*sy],
             [0,      0,    1,         0],
             [0,      0,    0,         1]]

  • aspect_ratio (Union[torch.Tensor, float], optional) – aspect ratio of the image pixels. 1.0 indicates square pixels. Defaults to 1.0.

Returns

shape should be (N, 4).

Return type

orig_cam (torch.Tensor)

static convert_orig_cam_to_matrix(orig_cam: torch.Tensor, **kwargs) → Tuple[torch.Tensor, torch.Tensor, torch.Tensor][source]

Compute intrinsic camera matrix from orig_cam parameter of smpl.

r > 1::

    K = [[sx*r,   0,    0,   tx*sx*r],
         [0,     sy,    0,     ty*sy],
         [0,      0,    1,         0],
         [0,      0,    0,         1]]

or r < 1::

    K = [[sx,    0,     0,   tx*sx],
         [0,   sy/r,    0,  ty*sy/r],
         [0,     0,     1,      0],
         [0,     0,     0,      1],]

rotation matrix: (N, 3, 3)::

    [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]

translation matrix: (N, 3)::

    [0, 0, -znear]
Parameters
  • orig_cam (torch.Tensor) – shape should be (N, 4).

  • znear (Union[torch.Tensor, float], optional) – near clipping plane of the view frustum. Defaults to 0.0.

  • aspect_ratio (Union[torch.Tensor, float], optional) – aspect ratio of the image pixels. 1.0 indicates square pixels. Defaults to 1.0.

Returns

opencv intrinsic matrix K: (N, 4, 4), rotation matrix: (N, 3, 3), translation: (N, 3)

Return type

Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
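
A minimal sketch converting a VIBE-style orig_cam parameter into K, R, T and building a camera from them; the orig_cam values and the constructor keywords K/R/T (assumed to follow the PyTorch3D convention) are assumptions:

import torch
from mmhuman3d.core.cameras import WeakPerspectiveCameras

# orig_cam: (N, 4) as [scale_x, scale_y, transl_x, transl_y]
orig_cam = torch.tensor([[0.9, 0.9, 0.1, 0.2]])
K, R, T = WeakPerspectiveCameras.convert_orig_cam_to_matrix(orig_cam)

# Wrap the matrices in a camera object (keyword names assumed).
cameras = WeakPerspectiveCameras(K=K, R=R, T=T)
print(K.shape, R.shape, T.shape)  # (1, 4, 4), (1, 3, 3), (1, 3)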

classmethod get_default_projection_matrix(**args)[source]

Class method. Calculate the projective transformation matrix by default parameters.

Parameters

**kwargs – parameters for the projection can be passed in as keyword arguments to override the default values set in __init__.

Returns

a torch.Tensor which represents a batch of projection matrices K of shape (N, 4, 4)

get_projection_transform(**kwargs) → pytorch3d.transforms.Transform3d[source]

Calculate the orthographic projection matrix. Use column major order.

Parameters

**kwargs – parameters for the projection can be passed in to override the default values set in __init__.

Returns

a Transform3d object which represents a batch of projection matrices of shape (N, 4, 4)

in_ndc()[source]

Boolean of whether in NDC.

is_perspective()[source]

Boolean of whether is perspective.

to_ndc(**kwargs)[source]

Not implemented.

to_ndc_(**kwargs)[source]

Not implemented.

to_screen(**kwargs)[source]

Not implemented.

to_screen_(**kwargs)[source]

Not implemented.

unproject_points(xy_depth: torch.Tensor, world_coordinates: bool = True, **kwargs) → torch.Tensor[source]

Sends points from camera coordinates (NDC or screen) back to camera view or world coordinates depending on the world_coordinates boolean argument of the function.

mmhuman3d.core.cameras.build_cameras(cfg)[source]

Build cameras.

mmhuman3d.core.cameras.compute_orbit_cameras(elev: float = 0, azim: float = 0, dist: float = 2.7, at: Union[torch.Tensor, List, Tuple] = (0, 0, 0), batch_size: int = 1, orbit_speed: Union[float, Tuple[float, float]] = 0, dist_speed: Optional[float] = 0, convention: str = 'pytorch3d')[source]

Generate a sequence of moving cameras following an orbit.

Parameters
  • elev (float, optional) – This is the angle between the vector from the object to the camera, and the horizontal plane y = 0 (xz-plane). Defaults to 0.

  • azim (float, optional) – angle in degrees or radians. The vector from the object to the camera is projected onto a horizontal plane y = 0. azim is the angle between the projected vector and a reference vector at (0, 0, 1) on the reference plane (the horizontal plane). Defaults to 0.

  • dist (float, optional) – distance of the camera from the object. Defaults to 2.7.

  • at (Union[torch.Tensor, List, Tuple], optional) – the position of the object(s) in world coordinates. Defaults to (0, 0, 0).

  • batch_size (int, optional) – batch size. Defaults to 1.

  • orbit_speed (Union[float, Tuple[float, float]], optional) – degree speed of camera moving along the orbit. Could be one or two numbers: one number for elev speed only, two numbers for both. Defaults to 0.

  • dist_speed (Optional[float], optional) – speed of camera moving along the center line. Defaults to 0.

  • convention (str, optional) – Camera convention. Defaults to ‘pytorch3d’.

Returns

computed K, R, T.

Return type

Tuple[torch.Tensor, torch.Tensor, torch.Tensor]
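
A minimal sketch that generates an orbiting camera trajectory and wraps it with build_cameras; the registered type name 'FoVPerspectiveCameras' and the K/R/T config keys are assumptions:

from mmhuman3d.core.cameras import build_cameras, compute_orbit_cameras

# 30 cameras orbiting the object at distance 2.7, moving 1 degree per step.
K, R, T = compute_orbit_cameras(
    elev=10, azim=0, dist=2.7, at=(0, 0, 0),
    batch_size=30, orbit_speed=1.0, convention='pytorch3d')

cameras = build_cameras(
    dict(type='FoVPerspectiveCameras', K=K, R=R, T=T))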

conventions

class mmhuman3d.core.conventions.body_segmentation(model_type='smpl')[source]

SMPL(X) body mesh vertex segmentation.

mmhuman3d.core.conventions.compress_converted_kps(zero_pad_array: Union[numpy.ndarray, torch.Tensor], mask_array: Union[numpy.ndarray, torch.Tensor]) → Union[numpy.ndarray, torch.Tensor][source]

Compress keypoints that are zero-padded after applying convert_kps.

Parameters
  • keypoints (np.ndarray) – input keypoints array, could be (f * n * J * 3/2) or (f * J * 3/2). You can set keypoints as np.zeros((1, J, 2)) if you only need mask.

  • [Union[np.ndarray (mask) – The original mask to mark the existence of the keypoints.

  • torch.Tensor]] – The original mask to mark the existence of the keypoints.

Returns

out_keypoints

Return type

Union[np.ndarray, torch.Tensor]

mmhuman3d.core.conventions.convert_K_3x3_to_4x4(K: Union[torch.Tensor, numpy.ndarray], is_perspective: bool = True) → Union[torch.Tensor, numpy.ndarray][source]

Convert opencv 3x3 intrinsic matrix to 4x4.

Parameters
  • K (Union[torch.Tensor, np.ndarray]) – Input 3x3 intrinsic matrix, left mm defined.

        [[fx,  0, px],
         [ 0, fy, py],
         [ 0,  0,  1]]

  • is_perspective (bool, optional) – whether is perspective projection. Defaults to True.

Raises
  • TypeError – K is not Tensor or array.

  • ValueError – Shape is not (batch, 3, 3) or (3, 3)

Returns

Output 4x4 intrinsic matrix.

for perspective:

    [[fx,  0, px,  0],
     [ 0, fy, py,  0],
     [ 0,  0,  0,  1],
     [ 0,  0,  1,  0]]

for orthographic:

    [[fx,  0,  0, px],
     [ 0, fy,  0, py],
     [ 0,  0,  1,  0],
     [ 0,  0,  0,  1]]

Return type

Union[torch.Tensor, np.ndarray]

mmhuman3d.core.conventions.convert_K_4x4_to_3x3(K: Union[torch.Tensor, numpy.ndarray], is_perspective: bool = True) → Union[torch.Tensor, numpy.ndarray][source]

Convert opencv 4x4 intrinsic matrix to 3x3.

Parameters
  • K (Union[torch.Tensor, np.ndarray]) – Input 4x4 intrinsic matrix, left mm defined.

    for perspective:

        [[fx,  0, px,  0],
         [ 0, fy, py,  0],
         [ 0,  0,  0,  1],
         [ 0,  0,  1,  0]]

    for orthographic:

        [[fx,  0,  0, px],
         [ 0, fy,  0, py],
         [ 0,  0,  1,  0],
         [ 0,  0,  0,  1]]

  • is_perspective (bool, optional) – whether is perspective projection. Defaults to True.

Raises
  • TypeError – type K should be Tensor or array.

  • ValueError – Shape is not (batch, 4, 4) or (4, 4).

Returns

Output 3x3 intrinsic matrix, left mm defined.

    [[fx,  0, px],
     [ 0, fy, py],
     [ 0,  0,  1]]

Return type

Union[torch.Tensor, np.ndarray]
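
A minimal round-trip sketch between the 3x3 and 4x4 opencv intrinsic conventions; the focal length and principal point values are arbitrary:

import torch
from mmhuman3d.core.conventions import (convert_K_3x3_to_4x4,
                                        convert_K_4x4_to_3x3)

K33 = torch.tensor([[[1000., 0., 512.],
                     [0., 1000., 512.],
                     [0., 0., 1.]]])   # (1, 3, 3), opencv pinhole K
K44 = convert_K_3x3_to_4x4(K33, is_perspective=True)
K33_back = convert_K_4x4_to_3x3(K44, is_perspective=True)
print(K44.shape, K33_back.shape)       # (1, 4, 4), (1, 3, 3)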

mmhuman3d.core.conventions.convert_cameras(K: Optional[Union[numpy.ndarray, torch.Tensor]] = None, R: Optional[Union[numpy.ndarray, torch.Tensor]] = None, T: Optional[Union[numpy.ndarray, torch.Tensor]] = None, is_perspective: bool = True, convention_src: str = 'opencv', convention_dst: str = 'pytorch3d', in_ndc_src: bool = True, in_ndc_dst: bool = True, resolution_src: Optional[Union[int, Tuple[int, int], torch.Tensor, numpy.ndarray]] = None, resolution_dst: Optional[Union[int, Tuple[int, int], torch.Tensor, numpy.ndarray]] = None, camera_conventions: dict = {'blender': {'axis': 'xy-z', 'left_mm_extrinsic': True, 'left_mm_intrinsic': True, 'view_to_world': False}, 'maya': {'axis': 'xy-z', 'left_mm_extrinsic': True, 'left_mm_intrinsic': True, 'view_to_world': False}, 'open3d': {'axis': 'x-yz', 'left_mm_extrinsic': False, 'left_mm_intrinsic': False, 'view_to_world': False}, 'opencv': {'axis': 'x-yz', 'left_mm_extrinsic': True, 'left_mm_intrinsic': True, 'view_to_world': True}, 'opengl': {'axis': 'xy-z', 'left_mm_extrinsic': True, 'left_mm_intrinsic': True, 'view_to_world': False}, 'pyrender': {'axis': 'xy-z', 'left_mm_extrinsic': True, 'left_mm_intrinsic': True, 'view_to_world': False}, 'pytorch3d': {'axis': '-xyz', 'left_mm_extrinsic': False, 'left_mm_intrinsic': True, 'view_to_world': False}, 'unity': {'axis': 'xyz', 'left_mm_extrinsic': True, 'left_mm_intrinsic': True, 'view_to_world': False}})Tuple[Union[torch.Tensor, numpy.ndarray], Union[torch.Tensor, numpy.ndarray], Union[torch.Tensor, numpy.ndarray]][source]

Convert the intrinsic matrix K and extrinsic matrix [R|T] from source convention to destination convention.

Parameters
  • K (Union[torch.Tensor, np.ndarray]) – Intrinsic matrix, shape should be (batch_size, 4, 4) or (batch_size, 3, 3). Will be ignored if None.

  • R (Optional[Union[torch.Tensor, np.ndarray]], optional) – Extrinsic rotation matrix. Shape should be (batch_size, 3, 3). Will be identity if None. Defaults to None.

  • T (Optional[Union[torch.Tensor, np.ndarray]], optional) – Extrinsic translation matrix. Shape should be (batch_size, 3). Will be zeros if None. Defaults to None.

  • is_perspective (bool, optional) – whether is perspective projection. Defaults to True.

  • convention_src (str, optional) – convention of the source camera.

  • convention_dst (str, optional) – convention of the destination camera.

    We define the convention of cameras by the order of right, front and up. E.g., the first one is pyrender and its convention should be ‘+x+z+y’. ‘+’ could be ignored. The second one is opencv and its convention should be ‘+x-z-y’. The third one is pytorch3d and its convention should be ‘-xzy’. (Axis diagrams for opengl(pyrender), opencv and pytorch3d omitted.)

  • in_ndc_src (bool, optional) – Whether is the source camera defined in ndc. Defaults to True.

  • in_ndc_dst (bool, optional) – Whether is the destination camera defined in ndc. Defaults to True.

    In camera_conventions, we define these args as:

    1) left_mm_extrinsic means the extrinsic matrix [R|T] is left matrix multiplication defined.

    2) left_mm_intrinsic means the intrinsic matrix K is left matrix multiplication defined.

    3) view_to_world means the extrinsic matrix [R|T] is defined as view to world.

  • resolution_src (Optional[Union[int, Tuple[int, int], torch.Tensor, np.ndarray]], optional) – Source camera image size of (height, width). Required if defined in screen. Will be square if int. Shape should be (2,) if array or tensor. Defaults to None.

  • resolution_dst (Optional[Union[int, Tuple[int, int], torch.Tensor, np.ndarray]], optional) – Destination camera image size of (height, width). Required if defined in screen. Will be square if int. Shape should be (2,) if array or tensor. Defaults to None.

  • camera_conventions (dict, optional) – dict containing pre-defined camera convention information. Defaults to CAMERA_CONVENTIONS.

Raises

TypeError – K, R, T should all be torch.Tensor or np.ndarray.

Returns

Tuple[Union[torch.Tensor, None], Union[torch.Tensor, None], Union[torch.Tensor, None]]: Converted K, R, T matrices.
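
A minimal sketch converting an opencv camera defined in screen space into a pytorch3d camera defined in NDC; the intrinsic values and image resolution are arbitrary:

import torch
from mmhuman3d.core.conventions import convert_cameras

K = torch.tensor([[[1000., 0., 512., 0.],
                   [0., 1000., 512., 0.],
                   [0., 0., 0., 1.],
                   [0., 0., 1., 0.]]])   # (1, 4, 4), opencv perspective K
R = torch.eye(3)[None]                   # (1, 3, 3)
T = torch.zeros(1, 3)                    # (1, 3)

K_dst, R_dst, T_dst = convert_cameras(
    K=K, R=R, T=T, is_perspective=True,
    convention_src='opencv', convention_dst='pytorch3d',
    in_ndc_src=False, in_ndc_dst=True,
    resolution_src=(1024, 1024))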

mmhuman3d.core.conventions.convert_kps(keypoints: Union[numpy.ndarray, torch.Tensor], src: str, dst: str, approximate: bool = False, mask: Optional[Union[numpy.ndarray, torch.Tensor]] = None, keypoints_factory: dict = {'agora': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'jaw', 'left_eyeball', 'right_eyeball', 'left_index_1', 'left_index_2', 'left_index_3', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'right_index_1', 'right_index_2', 'right_index_3', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8'], 'coco': ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip_extra', 'right_hip_extra', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle'], 'coco_wholebody': ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 
'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'left_hand_root', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'left_thumb', 'left_index_1', 'left_index_2', 'left_index_3', 'left_index', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_middle', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_ring', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_pinky', 'right_hand_root', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'right_thumb', 'right_index_1', 'right_index_2', 'right_index_3', 'right_index', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_middle', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_ring', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_pinky'], 'crowdpose': ['left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle', 'head', 'neck'], 'gta': ['gta_head_top', 'head', 'neck', 'gta_right_clavicle', 'right_shoulder', 'right_elbow', 'right_wrist', 'gta_left_clavicle', 'left_shoulder', 'left_elbow', 'left_wrist', 'spine_2', 'gta_spine1', 'spine_1', 'pelvis', 'gta_spine4', 'right_hip', 'right_knee', 'right_ankle', 'left_hip', 'left_knee', 'left_ankle', 'gta_SKEL_ROOT', 'gta_FB_R_Brow_Out_000', 'left_foot', 'gta_MH_R_Elbow', 'left_thumb_2', 'left_thumb_3', 'left_ring_2', 'left_ring_3', 'left_pinky_2', 'left_pinky_3', 'left_index_2', 'left_index_3', 'left_middle_2', 'left_middle_3', 'gta_RB_L_ArmRoll', 'gta_IK_R_Hand', 'gta_RB_R_ThighRoll', 'gta_FB_R_Lip_Corner_000', 'gta_SKEL_Pelvis', 'gta_IK_Head', 'gta_MH_R_Knee', 'gta_FB_LowerLipRoot_000', 'gta_FB_R_Lip_Top_000', 'gta_FB_R_CheekBone_000', 'gta_FB_UpperLipRoot_000', 'gta_FB_L_Lip_Top_000', 'gta_FB_LowerLip_000', 'right_foot', 'gta_FB_L_CheekBone_000', 'gta_MH_L_Elbow', 'gta_RB_L_ThighRoll', 'gta_PH_R_Foot', 'left_eye', 'gta_SKEL_L_Finger00', 'left_index_1', 'left_middle_1', 'left_ring_1', 'left_pinky_1', 'right_eye', 'gta_PH_R_Hand', 'gta_FB_L_Lip_Corner_000', 'gta_IK_R_Foot', 'gta_RB_Neck_1', 'gta_IK_L_Hand', 'gta_RB_R_ArmRoll', 'gta_FB_Brow_Centre_000', 'gta_FB_R_Lid_Upper_000', 'gta_RB_R_ForeArmRoll', 'gta_FB_L_Lid_Upper_000', 'gta_MH_L_Knee', 'gta_FB_Jaw_000', 'gta_FB_L_Lip_Bot_000', 'gta_FB_Tongue_000', 'gta_FB_R_Lip_Bot_000', 'gta_IK_Root', 'gta_PH_L_Foot', 'gta_FB_L_Brow_Out_000', 'gta_SKEL_R_Finger00', 'right_index_1', 'right_middle_1', 'right_ring_1', 'right_pinky_1', 'gta_PH_L_Hand', 'gta_RB_L_ForeArmRoll', 'gta_FB_UpperLip_000', 'right_thumb_2', 'right_thumb_3', 'right_ring_2', 'right_ring_3', 'right_pinky_2', 'right_pinky_3', 'right_index_2', 'right_index_3', 'right_middle_2', 'right_middle_3', 'gta_FACIAL_facialRoot', 'gta_IK_L_Foot', 'nose'], 'h36m': ['pelvis_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_hip_extra', 'right_knee', 'right_ankle', 'spine_extra', 'neck_extra', 'head_extra', 'headtop', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_shoulder', 'right_elbow', 'right_wrist'], 'h36m_mmpose': ['pelvis_extra', 'right_hip_extra', 'right_knee', 'right_ankle', 'left_hip_extra', 'left_knee', 'left_ankle', 'spine_extra', 'neck_extra', 'head_extra', 'headtop', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_shoulder', 'right_elbow', 'right_wrist'], 'human_data': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 
'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'jaw', 'left_eyeball', 'right_eyeball', 'left_index_1', 'left_index_2', 'left_index_3', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'right_index_1', 'right_index_2', 'right_index_3', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17', 'right_hip_extra', 'left_hip_extra', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra', 'nose_openpose', 'neck_openpose', 'right_shoulder_openpose', 'right_elbow_openpose', 'right_wrist_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'pelvis_openpose', 'right_hip_openpose', 'right_knee_openpose', 'right_ankle_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_ankle_openpose', 'right_eye_openpose', 'left_eye_openpose', 'right_ear_openpose', 'left_ear_openpose', 'left_bigtoe_openpose', 'left_smalltoe_openpose', 'left_heel_openpose', 'right_bigtoe_openpose', 'right_smalltoe_openpose', 'right_heel_openpose', 'spine_4_3dhp', 'left_clavicle_3dhp', 'right_clavicle_3dhp', 'left_hand_3dhp', 'right_hand_3dhp', 'left_toe_3dhp', 'right_toe_3dhp', 'head_h36m', 'headtop_h36m', 'head_bottom_pt', 'left_hand', 'right_hand'], 'hybrik_29': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'jaw', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_thumb', 'right_thumb', 'head', 'left_middle', 'right_middle', 'left_bigtoe', 'right_bigtoe'], 'hybrik_hp3d': ['spine_3', 'spine_4_3dhp', 'spine_2', 'spine_extra', 'pelvis', 'neck', 'head_extra', 'headtop', 'left_clavicle_3dhp', 'left_shoulder', 'left_elbow', 'left_wrist', 'left_hand_3dhp', 'right_clavicle_3dhp', 'right_shoulder', 'right_elbow', 'right_wrist', 'right_hand_3dhp', 'left_hip', 'left_knee', 'left_ankle', 'left_foot', 'left_toe_3dhp', 
'right_hip', 'right_knee', 'right_ankle', 'right_foot', 'right_toe_3dhp'], 'instavariety': ['right_heel_openpose', 'right_knee_openpose', 'right_hip_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_heel_openpose', 'right_wrist_openpose', 'right_elbow_openpose', 'right_shoulder_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'neck_openpose', 'headtop', 'nose_openpose', 'left_eye_openpose', 'right_eye_openpose', 'left_ear_openpose', 'right_ear_openpose', 'left_bigtoe_openpose', 'right_bigtoe_openpose', 'left_smalltoe_openpose', 'right_smalltoe_openpose', 'left_ankle_openpose', 'right_ankle_openpose'], 'lsp': ['right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist', 'neck_extra', 'headtop'], 'mpi_inf_3dhp': ['spine_3', 'spine_4_3dhp', 'spine_2', 'spine_extra', 'pelvis_extra', 'neck_extra', 'head_extra', 'headtop', 'left_clavicle_3dhp', 'left_shoulder', 'left_elbow', 'left_wrist', 'left_hand_3dhp', 'right_clavicle_3dhp', 'right_shoulder', 'right_elbow', 'right_wrist', 'right_hand_3dhp', 'left_hip_extra', 'left_knee', 'left_ankle', 'left_foot', 'left_toe_3dhp', 'right_hip_extra', 'right_knee', 'right_ankle', 'right_foot', 'right_toe_3dhp'], 'mpi_inf_3dhp_test': ['headtop', 'neck_extra', 'right_shoulder', 'right_elbow', 'right_wrist', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_hip_extra', 'right_knee', 'right_ankle', 'left_hip_extra', 'left_knee', 'left_ankle', 'pelvis_extra', 'spine_extra', 'head_extra'], 'mpii': ['right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'pelvis_extra', 'thorax_extra', 'neck_extra', 'headtop', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist'], 'openpose_135': ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle', 'neck', 'head', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'left_thumb', 'left_index_1', 'left_index_2', 'left_index_3', 'left_index', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_middle', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_ring', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_pinky', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'right_thumb', 'right_index_1', 'right_index_2', 'right_index_3', 'right_index', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_middle', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_ring', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_pinky', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 
'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'right_eyeball', 'left_eyeball'], 'openpose_25': ['nose_openpose', 'neck_openpose', 'right_shoulder_openpose', 'right_elbow_openpose', 'right_wrist_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'pelvis_openpose', 'right_hip_openpose', 'right_knee_openpose', 'right_ankle_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_ankle_openpose', 'right_eye_openpose', 'left_eye_openpose', 'right_ear_openpose', 'left_ear_openpose', 'left_bigtoe_openpose', 'left_smalltoe_openpose', 'left_heel_openpose', 'right_bigtoe_openpose', 'right_smalltoe_openpose', 'right_heel_openpose'], 'penn_action': ['head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle'], 'posetrack': ['nose', 'head_bottom_pt', 'headtop', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle'], 'pw3d': ['nose', 'neck_extra', 'right_shoulder', 'right_elbow', 'right_wrist', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_hip_extra', 'right_knee', 'right_ankle', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_eye', 'left_eye', 'right_ear', 'left_ear'], 'smpl': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hand', 'right_hand'], 'smpl_24': ['right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra', 'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear'], 'smpl_45': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hand', 'right_hand', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky'], 'smpl_49': ['nose_openpose', 'neck_openpose', 'right_shoulder_openpose', 'right_elbow_openpose', 'right_wrist_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'pelvis_openpose', 'right_hip_openpose', 'right_knee_openpose', 'right_ankle_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_ankle_openpose', 'right_eye_openpose', 'left_eye_openpose', 'right_ear_openpose', 'left_ear_openpose', 'left_bigtoe_openpose', 'left_smalltoe_openpose', 'left_heel_openpose', 'right_bigtoe_openpose', 'right_smalltoe_openpose', 'right_heel_openpose', 'right_ankle', 
'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra', 'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear'], 'smpl_54': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hand', 'right_hand', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_hip_extra', 'left_hip_extra', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra'], 'smplx': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'jaw', 'left_eyeball', 'right_eyeball', 'left_index_1', 'left_index_2', 'left_index_3', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'right_index_1', 'right_index_2', 'right_index_3', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17']})Tuple[Union[numpy.ndarray, torch.Tensor], Union[numpy.ndarray, torch.Tensor]][source]

Convert keypoints following the mapping correspondence between src and dst keypoints definition. Supported conventions by now: agora, coco, smplx, smpl, mpi_inf_3dhp, mpi_inf_3dhp_test, h36m, h36m_mmpose, pw3d, mpii, lsp.

Parameters
  • keypoints (Union[np.ndarray, torch.Tensor]) – input keypoints array, could be (f * n * J * 3/2) or (f * J * 3/2). You can set keypoints as np.zeros((1, J, 2)) if you only need mask.

  • src (str) – source data type from keypoints_factory.

  • dst (str) – destination data type from keypoints_factory.

  • approximate (bool) – control whether approximate mapping is allowed.

  • mask (Optional[Union[np.ndarray, torch.Tensor]], optional) – The original mask to mark the existence of the keypoints. None represents all ones mask. Defaults to None.

  • keypoints_factory (dict, optional) – A class to store the attributes. Defaults to keypoints_factory.

Returns

Tuple[Union[np.ndarray, torch.Tensor], Union[np.ndarray, torch.Tensor]]: tuple of (out_keypoints, mask). out_keypoints and mask will be of the same type.
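
A minimal sketch mapping keypoints from the coco convention to smpl; zero keypoints are enough if only the mask is needed:

import numpy as np
from mmhuman3d.core.conventions import convert_kps

coco_kps = np.zeros((1, 17, 3))  # (f, J, 3): one frame of coco keypoints
smpl_kps, mask = convert_kps(coco_kps, src='coco', dst='smpl')
# mask marks which smpl joints could be filled from the coco source.
print(smpl_kps.shape, mask.shape)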

mmhuman3d.core.conventions.convert_ndc_to_screen(K: Union[torch.Tensor, numpy.ndarray], resolution: Union[int, Tuple[int, int], List[int], torch.Tensor, numpy.ndarray], sign: Optional[Iterable[int]] = None, is_perspective: bool = True) → Union[torch.Tensor, numpy.ndarray][source]

Convert intrinsic matrix from ndc to screen.

Parameters
  • K (Union[torch.Tensor, np.ndarray]) – Input 4x4 intrinsic matrix, left mm defined.

  • resolution (Union[int, Tuple[int, int], torch.Tensor, np.ndarray]) – (height, width) of image.

  • sign (Optional[Union[Iterable[int]]], optional) – xyz axis sign. Defaults to None.

  • is_perspective (bool, optional) – whether is perspective projection. Defaults to True.

Raises
  • TypeError – K should be Tensor or array.

  • ValueError – shape of K should be (batch, 4, 4)

Returns

output intrinsic matrix.

Return type

Union[torch.Tensor, np.ndarray]

mmhuman3d.core.conventions.convert_perspective_to_weakperspective(K: Union[torch.Tensor, numpy.ndarray], zmean: Union[torch.Tensor, numpy.ndarray, float, int], resolution: Optional[Union[int, Tuple[int, int], torch.Tensor, numpy.ndarray]] = None, in_ndc: bool = False, convention: str = 'opencv') → Union[torch.Tensor, numpy.ndarray][source]

Convert perspective to weakperspective intrinsic matrix.

Parameters
  • K (Union[torch.Tensor, np.ndarray]) – input intrinsic matrix, shape should be (batch, 4, 4) or (batch, 3, 3).

  • zmean (Union[torch.Tensor, np.ndarray, int, float]) – zmean for object. shape should be (batch, ) or singleton number.

  • resolution (Optional[Union[int, Tuple[int, int], torch.Tensor, np.ndarray]], optional) – (height, width) of image. Defaults to None.

  • in_ndc (bool, optional) – whether defined in ndc. Defaults to False.

  • convention (str, optional) – camera convention. Defaults to ‘opencv’.

Returns

output weakperspective pred_cam, shape is (batch, 4)

Return type

Union[torch.Tensor, np.ndarray]

mmhuman3d.core.conventions.convert_screen_to_ndc(K: Union[torch.Tensor, numpy.ndarray], resolution: Union[int, Tuple[int, int], torch.Tensor, numpy.ndarray], sign: Optional[Iterable[int]] = None, is_perspective: bool = True) → Union[torch.Tensor, numpy.ndarray][source]

Convert intrinsic matrix from screen to ndc.

Parameters
  • K (Union[torch.Tensor, np.ndarray]) – input intrinsic matrix.

  • resolution (Union[int, Tuple[int, int], torch.Tensor, np.ndarray]) – (height, width) of image.

  • sign (Optional[Union[Iterable[int]]], optional) – xyz axis sign. Defaults to None.

  • is_perspective (bool, optional) – whether is perspective projection. Defaults to True.

Raises
  • TypeError – K should be Tensor or array.

  • ValueError – shape of K should be (batch, 4, 4)

Returns

output intrinsic matrix.

Return type

Union[torch.Tensor, np.ndarray]

mmhuman3d.core.conventions.convert_weakperspective_to_perspective(K: Union[torch.Tensor, numpy.ndarray], zmean: Union[torch.Tensor, numpy.ndarray, int, float], resolution: Optional[Union[int, Tuple[int, int], torch.Tensor, numpy.ndarray]] = None, in_ndc: bool = False, convention: str = 'opencv') → Union[torch.Tensor, numpy.ndarray][source]

Convert weakperspective intrinsic matrix to perspective intrinsic matrix.

Parameters
  • K (Union[torch.Tensor, np.ndarray]) – input intrinsic matrix, shape should be (batch, 4, 4) or (batch, 3, 3).

  • zmean (Union[torch.Tensor, np.ndarray, int, float]) – zmean for object. shape should be (batch, ) or singleton number.

  • resolution (Optional[Union[int, Tuple[int, int], torch.Tensor, np.ndarray]], optional) – (height, width) of image. Defaults to None.

  • in_ndc (bool, optional) – whether defined in ndc. Defaults to False.

  • convention (str, optional) – camera convention. Defaults to ‘opencv’.

Returns

output perspective intrinsic matrix, shape is (batch, 4, 4)

Return type

Union[torch.Tensor, np.ndarray]

mmhuman3d.core.conventions.convert_world_view(R: Union[torch.Tensor, numpy.ndarray], T: Union[torch.Tensor, numpy.ndarray]) → Tuple[Union[numpy.ndarray, torch.Tensor], Union[numpy.ndarray, torch.Tensor]][source]

Convert between view_to_world and world_to_view defined extrinsic matrix.

Parameters
  • R (Union[torch.Tensor, np.ndarray]) – extrinsic rotation matrix. shape should be (batch, 3, 3)

  • T (Union[torch.Tensor, np.ndarray]) – extrinsic translation matrix.

Raises

TypeError – R and T should be of the same type.

Returns

Tuple[Union[torch.Tensor, np.ndarray], Union[torch.Tensor, np.ndarray]]: output R, T.
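
A minimal sketch flipping an extrinsic [R|T] between world-to-view and view-to-world; an identity rotation and a simple translation are used for brevity:

import torch
from mmhuman3d.core.conventions import convert_world_view

R = torch.eye(3)[None]            # (1, 3, 3)
T = torch.tensor([[0., 0., 5.]])  # (1, 3)
R_inv, T_inv = convert_world_view(R, T)
print(R_inv.shape, T_inv.shape)   # (1, 3, 3), (1, 3)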

mmhuman3d.core.conventions.enc_camera_convention(convention, camera_conventions={'blender': {'axis': 'xy-z', 'left_mm_extrinsic': True, 'left_mm_intrinsic': True, 'view_to_world': False}, 'maya': {'axis': 'xy-z', 'left_mm_extrinsic': True, 'left_mm_intrinsic': True, 'view_to_world': False}, 'open3d': {'axis': 'x-yz', 'left_mm_extrinsic': False, 'left_mm_intrinsic': False, 'view_to_world': False}, 'opencv': {'axis': 'x-yz', 'left_mm_extrinsic': True, 'left_mm_intrinsic': True, 'view_to_world': True}, 'opengl': {'axis': 'xy-z', 'left_mm_extrinsic': True, 'left_mm_intrinsic': True, 'view_to_world': False}, 'pyrender': {'axis': 'xy-z', 'left_mm_extrinsic': True, 'left_mm_intrinsic': True, 'view_to_world': False}, 'pytorch3d': {'axis': '-xyz', 'left_mm_extrinsic': False, 'left_mm_intrinsic': True, 'view_to_world': False}, 'unity': {'axis': 'xyz', 'left_mm_extrinsic': True, 'left_mm_intrinsic': True, 'view_to_world': False}})[source]

Convert camera convention to axis direction and order.

mmhuman3d.core.conventions.get_flip_pairs(convention: str = 'smplx', keypoints_factory: dict = {...}) → List[int][source]

Get the indices of left-right keypoint pairs in the specified convention.

Parameters
  • convention (str) – keypoint convention name; must be one of the keys of keypoints_factory (e.g. 'smpl', 'smplx', 'coco', 'h36m').

  • keypoints_factory (dict, optional) – A dict mapping each convention name to its ordered list of keypoint names. The default covers the conventions 'agora', 'coco', 'coco_wholebody', 'crowdpose', 'gta', 'h36m', 'h36m_mmpose', 'human_data', 'hybrik_29', 'hybrik_hp3d', 'instavariety', 'lsp', 'mpi_inf_3dhp', 'mpi_inf_3dhp_test', 'mpii', 'openpose_135', 'openpose_25', 'penn_action', 'posetrack', 'pw3d', 'smpl', 'smpl_24', 'smpl_45', 'smpl_49', 'smpl_54' and 'smplx'.

Returns

left, right keypoint indices

Return type

List[int]
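
Example (a minimal usage sketch; it relies only on the signature documented above, and the printed values are illustrative):

    from mmhuman3d.core.conventions import get_flip_pairs

    # Index pairs of left/right keypoints in the COCO convention
    # (eyes, ears, shoulders, elbows, wrists, hips, knees, ankles).
    coco_pairs = get_flip_pairs(convention='coco')
    print(coco_pairs)

    # The same query against the default 'smplx' convention.
    smplx_pairs = get_flip_pairs()
    print(len(smplx_pairs))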

mmhuman3d.core.conventions.get_keypoint_idx(name: str, convention: str = 'smplx', approximate: bool = False, keypoints_factory: dict = {...}) → List[int][source]

Get the index of a keypoint in the specified convention by its name.

Parameters
  • name (str) – keypoint name

  • convention (str) – keypoint convention name; must be one of the keys of keypoints_factory (e.g. 'smpl', 'smplx', 'coco', 'h36m').

  • approximate (bool) – whether to allow approximate mapping when the exact keypoint name is not found in the convention. Defaults to False.

  • keypoints_factory (dict, optional) – A dict mapping each convention name to its ordered list of keypoint names. Defaults to the same built-in mapping described for get_flip_pairs.

Returns

keypoint index

Return type

List[int]
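
Example (a minimal usage sketch; the keypoint and convention names come from the default factory above, but whether 'left_hip' resolves via approximate matching in the 'coco' convention, which names its hips 'left_hip_extra', is an assumption):

    from mmhuman3d.core.conventions import get_keypoint_idx

    # Index of 'left_shoulder' in the SMPL convention (exact match).
    idx = get_keypoint_idx('left_shoulder', convention='smpl')

    # Illustrative only: allow approximate mapping when the exact name
    # is absent from the target convention.
    idx_approx = get_keypoint_idx('left_hip', convention='coco', approximate=True)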

mmhuman3d.core.conventions.get_keypoint_idxs_by_part(part: str, convention: str = 'smplx', keypoints_factory: dict = {...}) → List[int][source]

Get part keypoint indices from the specified part and convention.

Parameters
  • part (str) – part to search from

  • convention (str) – data type from keypoints_factory.

  • keypoints_factory (dict, optional) – A dict that stores all supported keypoint conventions. Defaults to KEYPOINTS_FACTORY.

Returns

part keypoint indices

Return type

List[int]

mmhuman3d.core.conventions.get_keypoint_num(convention: str = 'smplx', keypoints_factory: dict = KEYPOINTS_FACTORY) → int[source]

Get number of keypoints of specified convention.

Parameters
  • convention (str) – data type from keypoints_factory.

  • keypoints_factory (dict, optional) – A dict that stores all supported keypoint conventions. Defaults to KEYPOINTS_FACTORY.

Returns

The number of keypoints in the specified convention.

Return type

int
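
For instance, counting the keypoints of two conventions defined above (the numbers follow the convention lists shown in the signature defaults):

from mmhuman3d.core.conventions import get_keypoint_num

num_smpl = get_keypoint_num(convention='smpl')  # 24 keypoints in the 'smpl' convention
num_coco = get_keypoint_num(convention='coco')  # 17 keypoints in the 'coco' convention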

mmhuman3d.core.conventions.get_mapping(src: str, dst: str, approximate: bool = False, keypoints_factory: dict = KEYPOINTS_FACTORY)[source]

Get mapping list from src to dst.

Parameters
  • src (str) – source data type from keypoints_factory.

  • dst (str) – destination data type from keypoints_factory.

  • approximate (bool) – controls whether approximate mapping is allowed.

  • keypoints_factory (dict, optional) – A dict that stores all supported keypoint conventions. Defaults to KEYPOINTS_FACTORY.

Returns

[src_to_intersection_idx, dst_to_intersection_idx, intersection_names]

Return type

list
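
A minimal usage sketch; the exact contents of the returned list follow the description above.

from mmhuman3d.core.conventions import get_mapping

# Mapping information between the 'coco' and 'smpl_49' conventions:
# intersection indices for source and destination plus the shared keypoint names.
mapping = get_mapping(src='coco', dst='smpl_49')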

evaluation

mmhuman3d.core.evaluation.compute_similarity_transform(source_points, target_points)[source]

Computes a similarity transform (sR, t) that maps a set of 3D points source_points (N x 3) as closely as possible onto a set of 3D points target_points, where R is a 3x3 rotation matrix, t is a 3x1 translation vector and s is a scale factor.

Returns the transformed 3D points source_points_hat (N x 3), i.e. it solves the orthogonal Procrustes problem.

Notes

Number of points: N

Parameters
  • source_points (np.ndarray([N, 3])) – Source point set.

  • target_points (np.ndarray([N, 3])) – Target point set.

Returns

Transformed source point set.

Return type

source_points_hat (np.ndarray([N, 3]))
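
For intuition, the numpy sketch below implements such a similarity (Procrustes/Umeyama-style) alignment; it is an illustration only, not necessarily the library's exact implementation, and the helper name procrustes_align is hypothetical.

import numpy as np

def procrustes_align(source_points, target_points):
    """Align source_points (N, 3) onto target_points (N, 3) with scale, rotation, translation."""
    mu_s = source_points.mean(axis=0)
    mu_t = target_points.mean(axis=0)
    xs = source_points - mu_s
    xt = target_points - mu_t

    # Cross-covariance and SVD give the optimal rotation (orthogonal Procrustes).
    cov = xt.T @ xs
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))   # reflection correction
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt

    # Optimal scale and translation in the least-squares sense.
    var_s = (xs ** 2).sum()
    s = (S * np.diag(D)).sum() / var_s
    t = mu_t - s * R @ mu_s

    # Transformed source points, i.e. source_points_hat.
    return s * source_points @ R.T + t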

mmhuman3d.core.evaluation.keypoint_mpjpe(pred, gt, mask, alignment='none')[source]

Calculate the mean per-joint position error (MPJPE) and the error after rigid alignment with the ground truth (P-MPJPE).

Notation: batch_size: N, num_keypoints: K, keypoint_dims: C.

Parameters
  • pred (np.ndarray[N, K, C]) – Predicted keypoint locations.

  • gt (np.ndarray[N, K, C]) – Ground-truth keypoint locations.

  • mask – Visibility of the target. False for invisible joints, and True for visible. Invisible joints will be ignored for accuracy calculation.

  • alignment (str, optional) – Method to align the prediction with the ground truth. Supported options are: 'none': no alignment will be applied; 'scale': align in the least-square sense in scale; 'procrustes': align in the least-square sense in scale, rotation and translation.

Returns

A tuple containing joint position errors:
  • mpjpe (float | np.ndarray[N]): mean per-joint position error.
  • p-mpjpe (float | np.ndarray[N]): MPJPE after rigid alignment with the ground truth.

Return type

tuple
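
A minimal usage sketch on random toy data, assuming inputs shaped (N, K, C) as described above:

import numpy as np
from mmhuman3d.core.evaluation import keypoint_mpjpe

pred = np.random.rand(2, 17, 3)               # 2 samples, 17 keypoints, 3D coordinates
gt = pred + 0.01 * np.random.randn(2, 17, 3)
mask = np.ones((2, 17), dtype=bool)           # all joints visible

err = keypoint_mpjpe(pred, gt, mask, alignment='none')
err_pa = keypoint_mpjpe(pred, gt, mask, alignment='procrustes')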

filter

class mmhuman3d.core.filter.Gaus1dFilter(window_size=11, sigma=4)[source]

Applies a median filter and then a Gaussian filter. Code from: https://github.com/akanazawa/human_dynamics/blob/master/src/util/smooth_bbox.py.

Parameters
  • x (np.ndarray) – input pose

  • window_size (int, optional) – for median filters (must be odd).

  • sigma (float, optional) – Sigma for gaussian smoothing.

Returns

Smoothed poses

Return type

np.ndarray

class mmhuman3d.core.filter.OneEuroFilter(min_cutoff=0.004, beta=0.7)[source]

One-Euro filter, source code: https://github.com/mkocabas/VIBE/blob/c0c3f77d587351c806e901221a9dc05d1ffade4b/lib/utils/smooth_pose.py.

Parameters
  • min_cutoff (float, optional) – Decreasing the minimum cutoff frequency decreases slow-speed jitter.

  • beta (float, optional) – Increasing the speed coefficient decreases speed lag.

Returns

smoothed poses

Return type

np.ndarray

class mmhuman3d.core.filter.SGFilter(window_size=11, polyorder=2)[source]

savgol_filter is from scipy: https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.savgol_filter.html.

Parameters
  • window_size (int) – The length of the filter window (i.e., the number of coefficients). window_size must be a positive odd integer.

  • polyorder (int) – The order of the polynomial used to fit the samples. polyorder must be less than window_size.

Returns

smoothed poses (np.ndarray, torch.tensor)

mmhuman3d.core.filter.build_filter(cfg)[source]

Build a smoothing filter from the given config.
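
A minimal sketch of building a smoothing filter from a config dict; the registry key 'SGFilter' and calling the built filter directly on a pose array are assumptions based on the classes documented above.

import numpy as np
from mmhuman3d.core.filter import build_filter

filter_cfg = dict(type='SGFilter', window_size=11, polyorder=2)  # assumed registry name
smooth_fn = build_filter(filter_cfg)

poses = np.random.rand(100, 24, 3)   # e.g. 100 frames of 24 3D joints
smoothed = smooth_fn(poses)          # assumes the filter instance is callable on the array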

optimizer

mmhuman3d.core.optimizer.build_optimizers(model, cfgs)[source]

Build multiple optimizers from configs. If cfgs contains several dicts for optimizers, then a dict containing the constructed optimizers will be returned. If cfgs only contains one optimizer config, the constructed optimizer itself will be returned. For example,

  1. Multiple optimizer configs:

optimizer_cfg = dict(
    model1=dict(type='SGD', lr=lr),
    model2=dict(type='SGD', lr=lr))

The return dict is dict('model1': torch.optim.Optimizer, 'model2': torch.optim.Optimizer)

  2. Single optimizer config:

optimizer_cfg = dict(type='SGD', lr=lr)

The return is torch.optim.Optimizer.

Parameters
  • model (nn.Module) – The model with parameters to be optimized.

  • cfgs (dict) – The config dict of the optimizer.

Returns

The initialized optimizers.

Return type

dict[torch.optim.Optimizer] | torch.optim.Optimizer
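
A minimal usage sketch for the multi-optimizer case, assuming that the keys of the config dict match the names of the model's child modules:

import torch.nn as nn
from mmhuman3d.core.optimizer import build_optimizers

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.model1 = nn.Linear(10, 10)  # optimized by the 'model1' config
        self.model2 = nn.Linear(10, 2)   # optimized by the 'model2' config

model = ToyModel()
optimizer_cfg = dict(
    model1=dict(type='SGD', lr=1e-3),
    model2=dict(type='SGD', lr=1e-4))
optimizers = build_optimizers(model, optimizer_cfg)  # dict of two optimizers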

parametric_model

visualization

mmhuman3d.core.visualization.render_smpl(poses: Optional[Union[torch.Tensor, numpy.ndarray, dict]] = None, betas: Optional[Union[numpy.ndarray, torch.Tensor]] = None, transl: Optional[Union[numpy.ndarray, torch.Tensor]] = None, verts: Optional[Union[numpy.ndarray, torch.Tensor]] = None, model_type: typing_extensions.Literal[smpl, smplx] = 'smpl', body_model: Optional[torch.nn.modules.module.Module] = None, body_model_config: Optional[dict] = None, R: Optional[Union[numpy.ndarray, torch.Tensor]] = None, T: Optional[Union[numpy.ndarray, torch.Tensor]] = None, K: Optional[Union[numpy.ndarray, torch.Tensor]] = None, orig_cam: Optional[Union[numpy.ndarray, torch.Tensor]] = None, Ks: Optional[Union[numpy.ndarray, torch.Tensor]] = None, in_ndc: bool = True, convention: str = 'pytorch3d', projection: typing_extensions.Literal[weakperspective, perspective, fovperspective, orthographics, fovorthographics] = 'perspective', orbit_speed: Union[float, Tuple[float, float]] = 0.0, render_choice: typing_extensions.Literal[lq, mq, hq, silhouette, depth, normal, pointcloud, part_silhouette] = 'hq', palette: Union[List[str], str, numpy.ndarray] = 'white', resolution: Optional[Union[List[int], Tuple[int, int]]] = None, start: int = 0, end: Optional[int] = None, alpha: float = 1.0, no_grad: bool = True, batch_size: int = 10, device: Union[torch.device, str] = 'cuda', return_tensor: bool = False, output_path: Optional[str] = None, origin_frames: Optional[str] = None, frame_list: Optional[List[str]] = None, image_array: Optional[Union[numpy.ndarray, torch.Tensor]] = None, img_format: str = 'frame_%06d.jpg', overwrite: bool = False, mesh_file_path: Optional[str] = None, read_frames_batch: bool = False, plot_kps: bool = False, kp3d: Optional[Union[numpy.ndarray, torch.Tensor]] = None, mask: Optional[Union[numpy.ndarray, List[int]]] = None, vis_kp_index: bool = False)Union[None, torch.Tensor][source]

Render SMPL or SMPL-X mesh or silhouette into differentiable tensors, and export video or images.

Parameters
  • smpl parameters (#) –

  • poses (Union[torch.Tensor, np.ndarray, dict]) –

    1). tensor or array and ndim is 2, shape should be (frame, 72).

    2). tensor or array and ndim is 3, shape should be (frame, num_person, 72/165). num_person equals 1 means single-person. To render predicted multi-person results, pass multi-person weak-perspective cameras together with the poses; the meshes will be computed and an identity intrinsic matrix will be used.

    3). dict, standard dict format defined in smplx.body_models. will be treated as single-person.

    Lower priority than verts.

    Defaults to None.

  • betas (Optional[Union[torch.Tensor, np.ndarray]], optional) –

    1). ndim is 2, shape should be (frame, 10).

    2). ndim is 3, shape should be (frame, num_person, 10). num_person equals 1 means single-person. If poses are multi-person, betas should be set to the same person number.

    None will use default betas.

    Defaults to None.

  • transl (Optional[Union[torch.Tensor, np.ndarray]], optional) –

    translations of smpl(x).

    1). ndim is 2, shape should be (frame, 3).

    2). ndim is 3, shape should be (frame, num_person, 3). num_person equals 1 means single-person. If poses are multi-person, transl should be set to the same person number.

    Defaults to None.

  • verts (Optional[Union[torch.Tensor, np.ndarray]], optional) –

    1). ndim is 3, shape should be (frame, num_verts, 3).

    2). ndim is 4, shape should be (frame, num_person, num_verts, 3). num_person equals 1 means single-person.

    Higher priority over poses & betas & transl.

    Defaults to None.

  • model_type (Literal['smpl', 'smplx'], optional) –

    choose in 'smpl' or 'smplx'.

    Defaults to 'smpl'.

  • body_model (nn.Module, optional) –

    body_model created from smplx.create. Higher priority than body_model_config. Should not both be None.

    Defaults to None.

  • body_model_config (dict, optional) – body_model_config for build_model. Lower priority than body_model. Should not both be None. Defaults to None.

  • camera parameters (#) –

  • K (Optional[Union[torch.Tensor, np.ndarray]], optional) – shape should be (frame, 4, 4) or (frame, 3, 3), frame could be 1. if (4, 4) or (3, 3), dim 0 will be added automatically. Will be default FovPerspectiveCameras intrinsic if None. Lower priority than orig_cam.

  • R (Optional[Union[torch.Tensor, np.ndarray]], optional) –

    shape should be (frame, 3, 3). If frame equals 1, all cameras will have the same rotation. If K and orig_cam are None, R will be generated by look_at_view. If K or orig_cam is provided and R is None, R will be generated by convert_cameras.

    Defaults to None.

  • T (Optional[Union[torch.Tensor, np.ndarray]], optional) –

    shape should be (frame, 3). If frame equals 1, all cameras will have the same translation. If K and orig_cam are None, T will be generated by look_at_view. If K or orig_cam is provided and T is None, T will be generated by convert_cameras.

    Defaults to None.

  • orig_cam (Optional[Union[torch.Tensor, np.ndarray]], optional) –

    shape should be (frame, 4) or (frame, num_person, 4). If frame equals 1, it will be repeated to num_frames. num_person should be 1 for a single person. Usually used for HMR or VIBE predicted cameras. Higher priority than K & R & T.

    Defaults to None.

  • Ks (Optional[Union[torch.Tensor, np.ndarray]], optional) – shape should be (frame, 4, 4). This is for HMR or SPIN multi-person demo.

  • in_ndc (bool, optional) – Whether the camera intrinsics are defined in NDC space. Defaults to True.

  • convention (str, optional) –

    If you want to use an existing convention, choose from ['opengl', 'opencv', 'pytorch3d', 'pyrender', 'open3d', 'maya', 'blender', 'unity']. If you want to use a new convention, define it in CAMERA_CONVENTION_FACTORY (mmhuman3d/core/conventions/cameras/__init__.py) by the order of right, front and up.

    Defaults to ‘pytorch3d’.

  • projection (Literal['weakperspective', 'perspective', 'fovperspective', 'orthographics', 'fovorthographics'], optional) – projection mode of the cameras. Choose from ['orthographics', 'fovperspective', 'perspective', 'weakperspective', 'fovorthographics']. Defaults to 'perspective'.

  • orbit_speed (float, optional) – orbit speed for viewing when no K provided. float for only azim speed and Tuple for azim and elev.

  • render choice parameters (#) –

  • render_choice (Literal[, optional) –

    choose in [‘lq’, ‘mq’, ‘hq’, ‘silhouette’, ‘depth’, ‘normal’, ‘pointcloud’, ‘part_silhouette’] .

    lq, mq, hq would output (frame, h, w, 4) tensor.

    lq means low quality, mq means medium quality, hq means high quality.

    silhouette would output (frame, h, w) binary tensor.

    part_silhouette would output (frame, h, w, n_class) tensor.

    n_class is the body segmentation classes.

    depth will output a depth map of (frame, h, w, 1) tensor and ‘normal’ will output a normal map of (frame, h, w, 1).

    pointcloud will output a (frame, h, w, 4) tensor.

    Defaults to 'hq'.

  • palette (Union[List[str], str, np.ndarray], optional) –

    color theme str or list of color str or array.

    1). If you use a str to represent the color, choose from ['segmentation', 'random'] or a color name from the X11 Colormap https://en.wikipedia.org/wiki/X11_color_names. If you choose 'segmentation', each body part will get its own color.

    2). If you have multiple persons, it is better to give a list of str, otherwise all persons will share the same color.

    3). If you want to define your specific color, use an array of shape (3,) for single person and (N, 3) for multiple persons.

    If (3,) for multiple persons, all will be in the same color.

    Your array should be in range [0, 255] for 8 bit color.

    Defaults to ‘white’.

  • resolution (Union[Iterable[int], int], optional) –

    1). If iterable, should be (height, width) of output images.

    2). If int, would be taken as (resolution, resolution).

    Defaults to (1024, 1024).

    This will influence the overlay results when render with backgrounds. The output video will be rendered following the size of background images and finally resized to resolution.

  • start (int, optional) – start frame index. Defaults to 0.

  • end (int, optional) –

    end frame index. Exclusive.

    Could be positive int or negative int or None. None represents include all the frames.

    Defaults to None.

  • alpha (float, optional) –

    Transparency of the mesh. Range in [0.0, 1.0]

    Defaults to 1.0.

  • no_grad (bool, optional) –

    Set to True if you do not need a differentiable render.

    Defaults to True.

  • batch_size (int, optional) –

    Batch size for render. Related to your gpu memory.

    Defaults to 10.

  • file io parameters (#) –

  • return_tensor (bool, optional) –

    Whether return the result tensors.

    Defaults to False, will return None.

  • output_path (str, optional) –

    output video or gif or image folder.

    Defaults to None, which skips the export procedure.

  • background frames (#) – priority: image_array > frame_list > origin_frames

  • origin_frames (Optional[str], optional) –

    origin background frame path; could be an .mp4 or .gif file (which will be sliced into an image folder) or an image folder.

    Defaults to None.

  • frame_list (Optional[List[str]], optional) –

    list of origin background frame paths; each element in the list should be an image path like *.jpg or *.png. Use this when your file names are hard to sort or you only want to render a small number of frames.

    Defaults to None.

  • image_array (Optional[Union[np.ndarray, torch.Tensor]], optional) – origin background frame tensor or array; use this when you have your frames in memory as an array or tensor.

  • overwrite (bool, optional) –

    whether to overwrite existing files.

    Defaults to False.

  • mesh_file_path (str, optional) –

    the directory path to store the .ply files. Files will be named like 'frame_idx_person_idx.ply'.

    Defaults to None.

  • read_frames_batch (bool, optional) –

    Whether to read frames in batches. Set it to True if your video is large.

    Defaults to False.

  • visualize keypoints (#) –

  • plot_kps (bool, optional) –

    whether to plot keypoints on the output video.

    Defaults to False.

  • kp3d (Optional[Union[np.ndarray, torch.Tensor]], optional) –

    keypoints of any convention; a mask should be passed if there are any nonsense points. Shape should be (frame, )

    Defaults to None.

  • mask (Optional[Union[np.ndarray, List[int]]], optional) –

    Mask of keypoints existence.

    Defaults to None.

  • vis_kp_index (bool, optional) –

    Whether to plot keypoint index numbers on the human mesh.

    Defaults to False.

Returns

return the rendered image tensors or None.

Return type

Union[None, torch.Tensor]

mmhuman3d.core.visualization.visualize_T_pose(num_frames, orbit_speed=1.0, model_type='smpl', **kwargs) → None[source]

Simplest way to visualize a sequence of T pose.
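
A minimal usage sketch (assuming extra keyword arguments such as output_path are forwarded through **kwargs to the underlying SMPL visualizer):

>>> from mmhuman3d.core.visualization import visualize_T_pose
>>> # Render 100 frames of an orbiting T-posed SMPL body.
>>> # `output_path` here is an assumed **kwargs option of the underlying renderer.
>>> visualize_T_pose(
...     num_frames=100,
...     orbit_speed=1.0,
...     model_type='smpl',
...     output_path='t_pose.mp4')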

mmhuman3d.core.visualization.visualize_kp2d(kp2d: numpy.ndarray, output_path: Optional[str] = None, frame_list: Optional[List[str]] = None, origin_frames: Optional[str] = None, image_array: Optional[numpy.ndarray] = None, limbs: Optional[Union[numpy.ndarray, List[int]]] = None, palette: Optional[Iterable[int]] = None, data_source: str = 'coco', mask: Optional[Union[list, numpy.ndarray]] = None, img_format: str = '%06d.png', start: int = 0, end: Optional[int] = None, overwrite: bool = False, with_file_name: bool = True, resolution: Optional[Union[Tuple[int, int], list]] = None, fps: Union[float, int] = 30, draw_bbox: bool = False, with_number: bool = False, pop_parts: Optional[Iterable[str]] = None, disable_tqdm: bool = False, disable_limbs: bool = False, return_array: Optional[bool] = False, keypoints_factory: dict = {'agora': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'jaw', 'left_eyeball', 'right_eyeball', 'left_index_1', 'left_index_2', 'left_index_3', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'right_index_1', 'right_index_2', 'right_index_3', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8'], 'coco': ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip_extra', 'right_hip_extra', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle'], 'coco_wholebody': ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17', 
'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'left_hand_root', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'left_thumb', 'left_index_1', 'left_index_2', 'left_index_3', 'left_index', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_middle', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_ring', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_pinky', 'right_hand_root', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'right_thumb', 'right_index_1', 'right_index_2', 'right_index_3', 'right_index', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_middle', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_ring', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_pinky'], 'crowdpose': ['left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle', 'head', 'neck'], 'gta': ['gta_head_top', 'head', 'neck', 'gta_right_clavicle', 'right_shoulder', 'right_elbow', 'right_wrist', 'gta_left_clavicle', 'left_shoulder', 'left_elbow', 'left_wrist', 'spine_2', 'gta_spine1', 'spine_1', 'pelvis', 'gta_spine4', 'right_hip', 'right_knee', 'right_ankle', 'left_hip', 'left_knee', 'left_ankle', 'gta_SKEL_ROOT', 'gta_FB_R_Brow_Out_000', 'left_foot', 'gta_MH_R_Elbow', 'left_thumb_2', 'left_thumb_3', 'left_ring_2', 'left_ring_3', 'left_pinky_2', 'left_pinky_3', 'left_index_2', 'left_index_3', 'left_middle_2', 'left_middle_3', 'gta_RB_L_ArmRoll', 'gta_IK_R_Hand', 'gta_RB_R_ThighRoll', 'gta_FB_R_Lip_Corner_000', 'gta_SKEL_Pelvis', 'gta_IK_Head', 'gta_MH_R_Knee', 'gta_FB_LowerLipRoot_000', 'gta_FB_R_Lip_Top_000', 'gta_FB_R_CheekBone_000', 'gta_FB_UpperLipRoot_000', 'gta_FB_L_Lip_Top_000', 'gta_FB_LowerLip_000', 'right_foot', 'gta_FB_L_CheekBone_000', 'gta_MH_L_Elbow', 'gta_RB_L_ThighRoll', 'gta_PH_R_Foot', 'left_eye', 'gta_SKEL_L_Finger00', 'left_index_1', 'left_middle_1', 'left_ring_1', 'left_pinky_1', 'right_eye', 'gta_PH_R_Hand', 'gta_FB_L_Lip_Corner_000', 'gta_IK_R_Foot', 'gta_RB_Neck_1', 'gta_IK_L_Hand', 'gta_RB_R_ArmRoll', 'gta_FB_Brow_Centre_000', 'gta_FB_R_Lid_Upper_000', 'gta_RB_R_ForeArmRoll', 'gta_FB_L_Lid_Upper_000', 'gta_MH_L_Knee', 'gta_FB_Jaw_000', 'gta_FB_L_Lip_Bot_000', 'gta_FB_Tongue_000', 'gta_FB_R_Lip_Bot_000', 'gta_IK_Root', 'gta_PH_L_Foot', 'gta_FB_L_Brow_Out_000', 'gta_SKEL_R_Finger00', 'right_index_1', 'right_middle_1', 'right_ring_1', 'right_pinky_1', 'gta_PH_L_Hand', 'gta_RB_L_ForeArmRoll', 'gta_FB_UpperLip_000', 'right_thumb_2', 'right_thumb_3', 'right_ring_2', 'right_ring_3', 'right_pinky_2', 'right_pinky_3', 'right_index_2', 'right_index_3', 'right_middle_2', 'right_middle_3', 'gta_FACIAL_facialRoot', 'gta_IK_L_Foot', 'nose'], 'h36m': ['pelvis_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_hip_extra', 'right_knee', 'right_ankle', 'spine_extra', 'neck_extra', 'head_extra', 'headtop', 'left_shoulder', 
'left_elbow', 'left_wrist', 'right_shoulder', 'right_elbow', 'right_wrist'], 'h36m_mmpose': ['pelvis_extra', 'right_hip_extra', 'right_knee', 'right_ankle', 'left_hip_extra', 'left_knee', 'left_ankle', 'spine_extra', 'neck_extra', 'head_extra', 'headtop', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_shoulder', 'right_elbow', 'right_wrist'], 'human_data': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'jaw', 'left_eyeball', 'right_eyeball', 'left_index_1', 'left_index_2', 'left_index_3', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'right_index_1', 'right_index_2', 'right_index_3', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17', 'right_hip_extra', 'left_hip_extra', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra', 'nose_openpose', 'neck_openpose', 'right_shoulder_openpose', 'right_elbow_openpose', 'right_wrist_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'pelvis_openpose', 'right_hip_openpose', 'right_knee_openpose', 'right_ankle_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_ankle_openpose', 'right_eye_openpose', 'left_eye_openpose', 'right_ear_openpose', 'left_ear_openpose', 'left_bigtoe_openpose', 'left_smalltoe_openpose', 'left_heel_openpose', 'right_bigtoe_openpose', 'right_smalltoe_openpose', 'right_heel_openpose', 'spine_4_3dhp', 'left_clavicle_3dhp', 'right_clavicle_3dhp', 'left_hand_3dhp', 'right_hand_3dhp', 'left_toe_3dhp', 'right_toe_3dhp', 'head_h36m', 'headtop_h36m', 'head_bottom_pt', 'left_hand', 'right_hand'], 'hybrik_29': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 
'right_foot', 'neck', 'left_collar', 'right_collar', 'jaw', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_thumb', 'right_thumb', 'head', 'left_middle', 'right_middle', 'left_bigtoe', 'right_bigtoe'], 'hybrik_hp3d': ['spine_3', 'spine_4_3dhp', 'spine_2', 'spine_extra', 'pelvis', 'neck', 'head_extra', 'headtop', 'left_clavicle_3dhp', 'left_shoulder', 'left_elbow', 'left_wrist', 'left_hand_3dhp', 'right_clavicle_3dhp', 'right_shoulder', 'right_elbow', 'right_wrist', 'right_hand_3dhp', 'left_hip', 'left_knee', 'left_ankle', 'left_foot', 'left_toe_3dhp', 'right_hip', 'right_knee', 'right_ankle', 'right_foot', 'right_toe_3dhp'], 'instavariety': ['right_heel_openpose', 'right_knee_openpose', 'right_hip_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_heel_openpose', 'right_wrist_openpose', 'right_elbow_openpose', 'right_shoulder_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'neck_openpose', 'headtop', 'nose_openpose', 'left_eye_openpose', 'right_eye_openpose', 'left_ear_openpose', 'right_ear_openpose', 'left_bigtoe_openpose', 'right_bigtoe_openpose', 'left_smalltoe_openpose', 'right_smalltoe_openpose', 'left_ankle_openpose', 'right_ankle_openpose'], 'lsp': ['right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist', 'neck_extra', 'headtop'], 'mpi_inf_3dhp': ['spine_3', 'spine_4_3dhp', 'spine_2', 'spine_extra', 'pelvis_extra', 'neck_extra', 'head_extra', 'headtop', 'left_clavicle_3dhp', 'left_shoulder', 'left_elbow', 'left_wrist', 'left_hand_3dhp', 'right_clavicle_3dhp', 'right_shoulder', 'right_elbow', 'right_wrist', 'right_hand_3dhp', 'left_hip_extra', 'left_knee', 'left_ankle', 'left_foot', 'left_toe_3dhp', 'right_hip_extra', 'right_knee', 'right_ankle', 'right_foot', 'right_toe_3dhp'], 'mpi_inf_3dhp_test': ['headtop', 'neck_extra', 'right_shoulder', 'right_elbow', 'right_wrist', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_hip_extra', 'right_knee', 'right_ankle', 'left_hip_extra', 'left_knee', 'left_ankle', 'pelvis_extra', 'spine_extra', 'head_extra'], 'mpii': ['right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'pelvis_extra', 'thorax_extra', 'neck_extra', 'headtop', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist'], 'openpose_135': ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle', 'neck', 'head', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'left_thumb', 'left_index_1', 'left_index_2', 'left_index_3', 'left_index', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_middle', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_ring', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_pinky', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'right_thumb', 'right_index_1', 'right_index_2', 'right_index_3', 'right_index', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_middle', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_ring', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_pinky', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 
'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'right_eyeball', 'left_eyeball'], 'openpose_25': ['nose_openpose', 'neck_openpose', 'right_shoulder_openpose', 'right_elbow_openpose', 'right_wrist_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'pelvis_openpose', 'right_hip_openpose', 'right_knee_openpose', 'right_ankle_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_ankle_openpose', 'right_eye_openpose', 'left_eye_openpose', 'right_ear_openpose', 'left_ear_openpose', 'left_bigtoe_openpose', 'left_smalltoe_openpose', 'left_heel_openpose', 'right_bigtoe_openpose', 'right_smalltoe_openpose', 'right_heel_openpose'], 'penn_action': ['head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle'], 'posetrack': ['nose', 'head_bottom_pt', 'headtop', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle'], 'pw3d': ['nose', 'neck_extra', 'right_shoulder', 'right_elbow', 'right_wrist', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_hip_extra', 'right_knee', 'right_ankle', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_eye', 'left_eye', 'right_ear', 'left_ear'], 'smpl': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hand', 'right_hand'], 'smpl_24': ['right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra', 'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear'], 'smpl_45': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hand', 'right_hand', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 
'right_pinky'], 'smpl_49': ['nose_openpose', 'neck_openpose', 'right_shoulder_openpose', 'right_elbow_openpose', 'right_wrist_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'pelvis_openpose', 'right_hip_openpose', 'right_knee_openpose', 'right_ankle_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_ankle_openpose', 'right_eye_openpose', 'left_eye_openpose', 'right_ear_openpose', 'left_ear_openpose', 'left_bigtoe_openpose', 'left_smalltoe_openpose', 'left_heel_openpose', 'right_bigtoe_openpose', 'right_smalltoe_openpose', 'right_heel_openpose', 'right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra', 'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear'], 'smpl_54': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hand', 'right_hand', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_hip_extra', 'left_hip_extra', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra'], 'smplx': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'jaw', 'left_eyeball', 'right_eyeball', 'left_index_1', 'left_index_2', 'left_index_3', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'right_index_1', 'right_index_2', 'right_index_3', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'face_contour_1', 'face_contour_2', 
'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17']}) → Union[None, numpy.ndarray][source]

Visualize 2d keypoints to a video or into a folder of frames.

Parameters
  • kp2d (np.ndarray) – should be an array of shape (f * J * 2) or (f * n * J * 2)

  • output_path (str) – output video path or image folder.

  • frame_list (Optional[List[str]], optional) – list of origin background frame paths; each element in the list should be an image path such as *.jpg or *.png. Higher priority than origin_frames. Use this when your file names are hard to sort or you only want to render a small number of frames. Defaults to None.

  • origin_frames (Optional[str], optional) – origin background frame path; could be a .mp4, a .gif (which will be sliced into a folder of frames) or an image folder. Lower priority than frame_list. Defaults to None.

  • limbs (Optional[Union[np.ndarray, List[int]]], optional) – if not specified, the limbs will be searched by search_limbs; this option is for free skeletons such as BVH files. Defaults to None.

  • palette (Iterable, optional) – specified palette; three ints represent (B, G, R). Should be a tuple or list. Defaults to None.

  • data_source (str, optional) – data source type. Defaults to ‘coco’.

  • mask (Optional[Union[list, np.ndarray]], optional) – mask to mask out the incorrect points. Pass an np.ndarray of shape (J,) or a list of length J. Defaults to None.

  • img_format (str, optional) – input image format. Defaults to '%06d.png'.

  • start (int, optional) – start frame index. Defaults to 0.

  • end (int, optional) – end frame index. Exclusive. Could be a positive int, a negative int or None. None means all frames are included.

  • overwrite (bool, optional) – whether to replace the origin frames. Defaults to False.

  • with_file_name (bool, optional) – whether to write the origin frame name on the images. Defaults to True.

  • resolution (Optional[Union[Tuple[int, int], list]], optional) – (height, width) of the output video. If not specified, the output will keep the same size as the original images. Defaults to None.

  • fps (Union[float, int], optional) – fps. Defaults to 30.

  • draw_bbox (bool, optional) – whether to draw bounding boxes. Defaults to False.

  • with_number (bool, optional) – whether to draw keypoint index numbers. Defaults to False.

  • pop_parts (Iterable[str], optional) – The body part names you do not want to visualize. Supported parts are ['left_eye', 'right_eye', 'nose', 'mouth', 'face', 'left_hand', 'right_hand']. Defaults to [].

  • disable_tqdm (bool, optional) – Whether to disable the entire progressbar wrapper. Defaults to False.

  • disable_limbs (bool, optional) – whether to disable drawing limbs. Defaults to False.

  • return_array (bool, optional) – Whether to return images as an OpenCV array. Defaults to False.

  • keypoints_factory (dict, optional) – Dict of all the conventions. Defaults to KEYPOINTS_FACTORY.

Raises
  • FileNotFoundError – check output video path.

  • FileNotFoundError – check input frame paths.

Returns

Union[None, np.ndarray].
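
A minimal usage sketch with random placeholder keypoints (the file names below are illustrative):

>>> import numpy as np
>>> from mmhuman3d.core.visualization import visualize_kp2d
>>> # 30 frames of 17 COCO keypoints; random values stand in for real predictions.
>>> kp2d = np.random.rand(30, 17, 2) * 512
>>> visualize_kp2d(
...     kp2d,
...     data_source='coco',
...     output_path='kp2d.mp4',
...     resolution=(512, 512))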

mmhuman3d.core.visualization.visualize_kp3d(kp3d: numpy.ndarray, output_path: Optional[str] = None, limbs: Optional[Union[numpy.ndarray, List[int]]] = None, palette: Optional[Iterable[int]] = None, data_source: str = 'coco', mask: Optional[Union[numpy.ndarray, tuple, list]] = None, start: int = 0, end: Optional[int] = None, resolution: Union[list, Tuple[int, int]] = (1024, 1024), fps: Union[float, int] = 30, frame_names: Optional[Union[List[str], str]] = None, orbit_speed: Union[float, int] = 0.5, value_range: Union[Tuple[int, int], list] = (- 100, 100), pop_parts: Iterable[str] = (), disable_limbs: bool = False, return_array: Optional[bool] = None, convention: str = 'opencv', keypoints_factory: dict = {'agora': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'jaw', 'left_eyeball', 'right_eyeball', 'left_index_1', 'left_index_2', 'left_index_3', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'right_index_1', 'right_index_2', 'right_index_3', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8'], 'coco': ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip_extra', 'right_hip_extra', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle'], 'coco_wholebody': ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 
'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'left_hand_root', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'left_thumb', 'left_index_1', 'left_index_2', 'left_index_3', 'left_index', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_middle', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_ring', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_pinky', 'right_hand_root', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'right_thumb', 'right_index_1', 'right_index_2', 'right_index_3', 'right_index', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_middle', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_ring', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_pinky'], 'crowdpose': ['left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle', 'head', 'neck'], 'gta': ['gta_head_top', 'head', 'neck', 'gta_right_clavicle', 'right_shoulder', 'right_elbow', 'right_wrist', 'gta_left_clavicle', 'left_shoulder', 'left_elbow', 'left_wrist', 'spine_2', 'gta_spine1', 'spine_1', 'pelvis', 'gta_spine4', 'right_hip', 'right_knee', 'right_ankle', 'left_hip', 'left_knee', 'left_ankle', 'gta_SKEL_ROOT', 'gta_FB_R_Brow_Out_000', 'left_foot', 'gta_MH_R_Elbow', 'left_thumb_2', 'left_thumb_3', 'left_ring_2', 'left_ring_3', 'left_pinky_2', 'left_pinky_3', 'left_index_2', 'left_index_3', 'left_middle_2', 'left_middle_3', 'gta_RB_L_ArmRoll', 'gta_IK_R_Hand', 'gta_RB_R_ThighRoll', 'gta_FB_R_Lip_Corner_000', 'gta_SKEL_Pelvis', 'gta_IK_Head', 'gta_MH_R_Knee', 'gta_FB_LowerLipRoot_000', 'gta_FB_R_Lip_Top_000', 'gta_FB_R_CheekBone_000', 'gta_FB_UpperLipRoot_000', 'gta_FB_L_Lip_Top_000', 'gta_FB_LowerLip_000', 'right_foot', 'gta_FB_L_CheekBone_000', 'gta_MH_L_Elbow', 'gta_RB_L_ThighRoll', 'gta_PH_R_Foot', 'left_eye', 'gta_SKEL_L_Finger00', 'left_index_1', 'left_middle_1', 'left_ring_1', 'left_pinky_1', 'right_eye', 'gta_PH_R_Hand', 'gta_FB_L_Lip_Corner_000', 'gta_IK_R_Foot', 'gta_RB_Neck_1', 'gta_IK_L_Hand', 'gta_RB_R_ArmRoll', 'gta_FB_Brow_Centre_000', 'gta_FB_R_Lid_Upper_000', 'gta_RB_R_ForeArmRoll', 'gta_FB_L_Lid_Upper_000', 'gta_MH_L_Knee', 'gta_FB_Jaw_000', 'gta_FB_L_Lip_Bot_000', 'gta_FB_Tongue_000', 'gta_FB_R_Lip_Bot_000', 'gta_IK_Root', 'gta_PH_L_Foot', 'gta_FB_L_Brow_Out_000', 'gta_SKEL_R_Finger00', 'right_index_1', 'right_middle_1', 'right_ring_1', 'right_pinky_1', 'gta_PH_L_Hand', 'gta_RB_L_ForeArmRoll', 'gta_FB_UpperLip_000', 'right_thumb_2', 'right_thumb_3', 'right_ring_2', 'right_ring_3', 'right_pinky_2', 'right_pinky_3', 'right_index_2', 'right_index_3', 'right_middle_2', 'right_middle_3', 'gta_FACIAL_facialRoot', 'gta_IK_L_Foot', 'nose'], 'h36m': ['pelvis_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_hip_extra', 'right_knee', 'right_ankle', 'spine_extra', 'neck_extra', 'head_extra', 'headtop', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_shoulder', 'right_elbow', 'right_wrist'], 'h36m_mmpose': ['pelvis_extra', 
'right_hip_extra', 'right_knee', 'right_ankle', 'left_hip_extra', 'left_knee', 'left_ankle', 'spine_extra', 'neck_extra', 'head_extra', 'headtop', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_shoulder', 'right_elbow', 'right_wrist'], 'human_data': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'jaw', 'left_eyeball', 'right_eyeball', 'left_index_1', 'left_index_2', 'left_index_3', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'right_index_1', 'right_index_2', 'right_index_3', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17', 'right_hip_extra', 'left_hip_extra', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra', 'nose_openpose', 'neck_openpose', 'right_shoulder_openpose', 'right_elbow_openpose', 'right_wrist_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'pelvis_openpose', 'right_hip_openpose', 'right_knee_openpose', 'right_ankle_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_ankle_openpose', 'right_eye_openpose', 'left_eye_openpose', 'right_ear_openpose', 'left_ear_openpose', 'left_bigtoe_openpose', 'left_smalltoe_openpose', 'left_heel_openpose', 'right_bigtoe_openpose', 'right_smalltoe_openpose', 'right_heel_openpose', 'spine_4_3dhp', 'left_clavicle_3dhp', 'right_clavicle_3dhp', 'left_hand_3dhp', 'right_hand_3dhp', 'left_toe_3dhp', 'right_toe_3dhp', 'head_h36m', 'headtop_h36m', 'head_bottom_pt', 'left_hand', 'right_hand'], 'hybrik_29': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'jaw', 'left_shoulder', 'right_shoulder', 'left_elbow', 
'right_elbow', 'left_wrist', 'right_wrist', 'left_thumb', 'right_thumb', 'head', 'left_middle', 'right_middle', 'left_bigtoe', 'right_bigtoe'], 'hybrik_hp3d': ['spine_3', 'spine_4_3dhp', 'spine_2', 'spine_extra', 'pelvis', 'neck', 'head_extra', 'headtop', 'left_clavicle_3dhp', 'left_shoulder', 'left_elbow', 'left_wrist', 'left_hand_3dhp', 'right_clavicle_3dhp', 'right_shoulder', 'right_elbow', 'right_wrist', 'right_hand_3dhp', 'left_hip', 'left_knee', 'left_ankle', 'left_foot', 'left_toe_3dhp', 'right_hip', 'right_knee', 'right_ankle', 'right_foot', 'right_toe_3dhp'], 'instavariety': ['right_heel_openpose', 'right_knee_openpose', 'right_hip_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_heel_openpose', 'right_wrist_openpose', 'right_elbow_openpose', 'right_shoulder_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'neck_openpose', 'headtop', 'nose_openpose', 'left_eye_openpose', 'right_eye_openpose', 'left_ear_openpose', 'right_ear_openpose', 'left_bigtoe_openpose', 'right_bigtoe_openpose', 'left_smalltoe_openpose', 'right_smalltoe_openpose', 'left_ankle_openpose', 'right_ankle_openpose'], 'lsp': ['right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist', 'neck_extra', 'headtop'], 'mpi_inf_3dhp': ['spine_3', 'spine_4_3dhp', 'spine_2', 'spine_extra', 'pelvis_extra', 'neck_extra', 'head_extra', 'headtop', 'left_clavicle_3dhp', 'left_shoulder', 'left_elbow', 'left_wrist', 'left_hand_3dhp', 'right_clavicle_3dhp', 'right_shoulder', 'right_elbow', 'right_wrist', 'right_hand_3dhp', 'left_hip_extra', 'left_knee', 'left_ankle', 'left_foot', 'left_toe_3dhp', 'right_hip_extra', 'right_knee', 'right_ankle', 'right_foot', 'right_toe_3dhp'], 'mpi_inf_3dhp_test': ['headtop', 'neck_extra', 'right_shoulder', 'right_elbow', 'right_wrist', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_hip_extra', 'right_knee', 'right_ankle', 'left_hip_extra', 'left_knee', 'left_ankle', 'pelvis_extra', 'spine_extra', 'head_extra'], 'mpii': ['right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'pelvis_extra', 'thorax_extra', 'neck_extra', 'headtop', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist'], 'openpose_135': ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle', 'neck', 'head', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'left_thumb', 'left_index_1', 'left_index_2', 'left_index_3', 'left_index', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_middle', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_ring', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_pinky', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'right_thumb', 'right_index_1', 'right_index_2', 'right_index_3', 'right_index', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_middle', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_ring', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_pinky', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 
'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'right_eyeball', 'left_eyeball'], 'openpose_25': ['nose_openpose', 'neck_openpose', 'right_shoulder_openpose', 'right_elbow_openpose', 'right_wrist_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'pelvis_openpose', 'right_hip_openpose', 'right_knee_openpose', 'right_ankle_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_ankle_openpose', 'right_eye_openpose', 'left_eye_openpose', 'right_ear_openpose', 'left_ear_openpose', 'left_bigtoe_openpose', 'left_smalltoe_openpose', 'left_heel_openpose', 'right_bigtoe_openpose', 'right_smalltoe_openpose', 'right_heel_openpose'], 'penn_action': ['head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle'], 'posetrack': ['nose', 'head_bottom_pt', 'headtop', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle'], 'pw3d': ['nose', 'neck_extra', 'right_shoulder', 'right_elbow', 'right_wrist', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_hip_extra', 'right_knee', 'right_ankle', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_eye', 'left_eye', 'right_ear', 'left_ear'], 'smpl': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hand', 'right_hand'], 'smpl_24': ['right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra', 'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear'], 'smpl_45': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hand', 'right_hand', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky'], 'smpl_49': ['nose_openpose', 'neck_openpose', 'right_shoulder_openpose', 
'right_elbow_openpose', 'right_wrist_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'pelvis_openpose', 'right_hip_openpose', 'right_knee_openpose', 'right_ankle_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_ankle_openpose', 'right_eye_openpose', 'left_eye_openpose', 'right_ear_openpose', 'left_ear_openpose', 'left_bigtoe_openpose', 'left_smalltoe_openpose', 'left_heel_openpose', 'right_bigtoe_openpose', 'right_smalltoe_openpose', 'right_heel_openpose', 'right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra', 'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear'], 'smpl_54': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hand', 'right_hand', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_hip_extra', 'left_hip_extra', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra'], 'smplx': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'jaw', 'left_eyeball', 'right_eyeball', 'left_index_1', 'left_index_2', 'left_index_3', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'right_index_1', 'right_index_2', 'right_index_3', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 
'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17']}) → Union[None, numpy.ndarray][source]

Visualize 3d keypoints to a video with matplotlib. Support multi person and specified limb connections.

Parameters
  • kp3d (np.ndarray) – shape could be (f * J * 4/3/2) or (f * num_person * J * 4/3/2)

  • output_path (str) – output video path or image folder.

  • limbs (Optional[Union[np.ndarray, List[int]]], optional) – if not specified, the limbs will be searched by search_limbs; this option is for free skeletons such as BVH files. Defaults to None.

  • palette (Iterable, optional) – specified palette; three ints represent (B, G, R). Should be a tuple or list. Defaults to None.

  • data_source (str, optional) – data source type. Choose from ['coco', 'smplx', 'smpl', 'coco_wholebody', 'mpi_inf_3dhp', 'mpi_inf_3dhp_test', 'h36m', 'pw3d', 'mpii']. Defaults to 'coco'.

  • mask (Optional[Union[list, tuple, np.ndarray]], optional) – mask to mask out the incorrect points. Defaults to None.

  • start (int, optional) – start frame index. Defaults to 0.

  • end (int, optional) – end frame index. Could be a positive int, a negative int or None. None means all frames are included. Defaults to None.

  • resolution (Union[list, Tuple[int, int]], optional) – (width, height) of the output video. Defaults to (1024, 1024).

  • fps (Union[float, int], optional) – fps. Defaults to 30.

  • frame_names (Optional[Union[List[str], str]], optional) – a list (its length should match the number of frames), a single string, or a format string (like 'frame%06d') used for the frame title; no title if None. Defaults to None.

  • orbit_speed (Union[float, int], optional) – orbit speed of camera. Defaults to 0.5.

  • value_range (Union[Tuple[int, int], list], optional) – range of axis value. Defaults to (-100, 100).

  • pop_parts (Iterable[str], optional) – The body part names you do not want to visualize. Choose from ['left_eye', 'right_eye', 'nose', 'mouth', 'face', 'left_hand', 'right_hand']. Defaults to [].

  • disable_limbs (bool, optional) – whether to disable drawing limbs. Defaults to False.

  • return_array (bool, optional) – Whether to return images as an OpenCV array. If None, an array will be returned when the frame number is below 100. Defaults to None.

  • keypoints_factory (dict, optional) – Dict of all the conventions. Defaults to KEYPOINTS_FACTORY.

Raises
  • TypeError – check the type of input keypoints.

  • FileNotFoundError – check the output video path.

Returns

Union[None, np.ndarray].
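
A minimal usage sketch with random placeholder keypoints (the file names below are illustrative):

>>> import numpy as np
>>> from mmhuman3d.core.visualization import visualize_kp3d
>>> # 30 frames of 17 COCO keypoints in 3D; values are placeholders.
>>> kp3d = np.random.uniform(-100, 100, size=(30, 17, 3))
>>> visualize_kp3d(
...     kp3d,
...     data_source='coco',
...     output_path='kp3d.mp4',
...     value_range=(-100, 100))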

mmhuman3d.core.visualization.visualize_smpl_calibration(K, R, T, resolution, **kwargs) → None[source]

Visualize an SMPL mesh with an OpenCV calibration matrix defined in screen space.
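
A hedged sketch of a pinhole calibration setup; the pose and output arguments (poses, output_path) are assumed to be forwarded through **kwargs to the generic SMPL visualizer:

>>> import numpy as np
>>> from mmhuman3d.core.visualization import visualize_smpl_calibration
>>> K = np.array([[1000., 0., 512.],
...               [0., 1000., 512.],
...               [0., 0., 1.]])          # 3x3 OpenCV intrinsic matrix
>>> R = np.eye(3)                         # extrinsic rotation
>>> T = np.array([0., 0., 5.])            # extrinsic translation
>>> # `poses` and `output_path` are assumed **kwargs of the underlying renderer.
>>> visualize_smpl_calibration(
...     K=K, R=R, T=T,
...     resolution=(1024, 1024),
...     poses=np.zeros((1, 72)),
...     output_path='calibration.mp4')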

mmhuman3d.core.visualization.visualize_smpl_hmr(cam_transl, bbox=None, kp2d=None, focal_length=5000, det_width=224, det_height=224, bbox_format='xyxy', **kwargs) → None[source]

Simplest way to visualize HMR, SPIN or SMPLify predicted SMPL with origin frames and predicted cameras.
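
A hedged sketch with placeholder predictions; poses, origin_frames and output_path are assumed to be accepted via **kwargs:

>>> import numpy as np
>>> from mmhuman3d.core.visualization import visualize_smpl_hmr
>>> num_frames = 30
>>> cam_transl = np.zeros((num_frames, 3))                 # predicted camera translations
>>> bbox = np.tile([0., 0., 224., 224.], (num_frames, 1))  # xyxy person boxes
>>> visualize_smpl_hmr(
...     cam_transl=cam_transl,
...     bbox=bbox,
...     bbox_format='xyxy',
...     focal_length=5000,
...     poses=np.zeros((num_frames, 72)),   # assumed **kwargs: axis-angle SMPL poses
...     origin_frames='input.mp4',          # assumed **kwargs: background video
...     output_path='hmr_overlay.mp4')      # assumed **kwargs: export path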

mmhuman3d.core.visualization.visualize_smpl_pose(poses=None, verts=None, **kwargs) → None[source]

Simplest way to visualize a sequence of SMPL poses.

Cameras will focus on the center of the SMPL mesh; setting orbit_speed is recommended.
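
A minimal sketch; output_path is assumed to be forwarded through **kwargs, and orbit_speed is the recommended extra option mentioned above:

>>> import numpy as np
>>> from mmhuman3d.core.visualization import visualize_smpl_pose
>>> poses = np.zeros((30, 72))      # 30 frames of axis-angle SMPL pose (placeholder)
>>> visualize_smpl_pose(
...     poses=poses,
...     orbit_speed=1.0,            # assumed **kwargs option
...     output_path='pose.mp4')     # assumed **kwargs option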

mmhuman3d.core.visualization.visualize_smpl_vibe(orig_cam=None, pred_cam=None, bbox=None, output_path='sample.mp4', resolution=None, aspect_ratio=1.0, bbox_scale_factor=1.25, bbox_format='xyxy', **kwargs) → None[source]

Simplest way to visualize predicted SMPL (e.g. from VIBE) with origin frames and predicted cameras.
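
A hedged sketch with placeholder predictions; the camera array shape and the poses/origin_frames keyword arguments are assumptions for illustration only:

>>> import numpy as np
>>> from mmhuman3d.core.visualization import visualize_smpl_vibe
>>> num_frames = 30
>>> pred_cam = np.zeros((num_frames, 4))                   # placeholder predicted cameras
>>> bbox = np.tile([0., 0., 224., 224.], (num_frames, 1))  # xyxy person boxes
>>> visualize_smpl_vibe(
...     pred_cam=pred_cam,
...     bbox=bbox,
...     bbox_format='xyxy',
...     poses=np.zeros((num_frames, 72)),   # assumed **kwargs: axis-angle SMPL poses
...     origin_frames='input.mp4',          # assumed **kwargs: background video
...     output_path='vibe_overlay.mp4')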

mmhuman3d.models

models

mmhuman3d.models.build_architecture(cfg)[source]

Build framework.

mmhuman3d.models.build_backbone(cfg)[source]

Build backbone.

mmhuman3d.models.build_body_model(cfg)[source]

Build body model.

mmhuman3d.models.build_discriminator(cfg)[source]

Build discriminator.

mmhuman3d.models.build_head(cfg)[source]

Build head.

mmhuman3d.models.build_loss(cfg)[source]

Build loss.

mmhuman3d.models.build_neck(cfg)[source]

Build neck.
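
For illustration, a hedged sketch of how these builders are typically called with config dicts; the type names and their arguments below are assumptions, not values documented on this page:

>>> from mmhuman3d.models import build_backbone, build_loss
>>> # `type` names and arguments are illustrative assumptions.
>>> backbone = build_backbone(dict(type='ResNet', depth=50))
>>> loss = build_loss(dict(type='MSELoss', loss_weight=1.0))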

architectures

class mmhuman3d.models.architectures.HybrIK_trainer(backbone=None, neck=None, head=None, body_model=None, loss_beta=None, loss_theta=None, loss_twist=None, loss_uvd=None, init_cfg=None)[source]

Hybrik_trainer Architecture.

Parameters
  • backbone (dict | None, optional) – Backbone config dict. Default: None.

  • neck (dict | None, optional) – Neck config dict. Default: None

  • head (dict | None, optional) – Regressor config dict. Default: None.

  • body_model (dict | None, optional) – SMPL config dict. Default: None.

  • loss_beta (dict | None, optional) – Losses config dict for beta (shape parameters) estimation. Default: None

  • loss_theta (dict | None, optional) – Losses config dict for theta (pose parameters) estimation. Default: None

  • loss_twist (dict | None, optional) – Losses config dict for twist angle estimation. Default: None

  • loss_uvd (dict | None, optional) – Losses config dict for uvd (heatmap) estimation. Default: None

  • init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None
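
A hedged configuration sketch for building this architecture with build_architecture; every type name and nested argument below is an illustrative assumption rather than a value taken from this page:

>>> from mmhuman3d.models import build_architecture
>>> # All `type` names and nested arguments are illustrative assumptions.
>>> cfg = dict(
...     type='HybrIK_trainer',
...     backbone=dict(type='ResNet', depth=34),
...     head=dict(type='HybrIKHead'),                      # hypothetical head config
...     body_model=dict(type='HybrIKSMPL'),                # hypothetical body model config
...     loss_beta=dict(type='MSELoss', loss_weight=1.0),
...     loss_theta=dict(type='MSELoss', loss_weight=0.01),
...     loss_twist=dict(type='MSELoss', loss_weight=0.01),
...     loss_uvd=dict(type='L1Loss', loss_weight=1.0))
>>> model = build_architecture(cfg)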

compute_losses(predictions, targets)[source]

Compute regression losses for beta, theta, twist and uvd.

forward_test(img, img_metas, **kwargs)[source]

Test step function.

In this function, the test step is carried out following the pipeline:

  1. extract features with the backbone

  2. feed the extracted features into the head to predict beta, theta, twist angle, and heatmap (uvd map)

  3. store predictions for evaluation

Parameters
  • img (torch.Tensor) – Batch of data as input.

  • img_metas (dict) – Dict with image metas, i.e. path.

  • kwargs (dict) – Dict with ground-truth.

Returns

Dict with image_path, vertices, xyz_17, uvd_jts, xyz_24 for predictions.

Return type

all_preds (dict)

forward_train(img, img_metas, **kwargs)[source]

Train step function.

In this function, the train step is carried out following the pipeline:

  1. extract features with the backbone

  2. feed the extracted features into the head to predict beta, theta, twist angle, and heatmap (uvd map)

  3. compute regression losses of the predictions and optimize the backbone and head

Parameters
  • img (torch.Tensor) – Batch of data as input.

  • img_metas (dict) – Dict with image metas, i.e. path.

  • kwargs (dict) – Dict with ground-truth.

Returns

Dict with loss, information for logger, the number of samples.

Return type

output (dict)

class mmhuman3d.models.architectures.ImageBodyModelEstimator(backbone: Optional[dict] = None, neck: Optional[dict] = None, head: Optional[dict] = None, disc: Optional[dict] = None, registrant: Optional[dict] = None, body_model_train: Optional[dict] = None, body_model_test: Optional[dict] = None, convention: Optional[str] = 'human_data', loss_keypoints2d: Optional[dict] = None, loss_keypoints3d: Optional[dict] = None, loss_vertex: Optional[dict] = None, loss_smpl_pose: Optional[dict] = None, loss_smpl_betas: Optional[dict] = None, loss_camera: Optional[dict] = None, loss_adv: Optional[dict] = None, init_cfg: Optional[Union[list, dict]] = None)[source]
forward_test(img: torch.Tensor, img_metas: dict, **kwargs)[source]

Defines the computation performed at every call when testing.

class mmhuman3d.models.architectures.VideoBodyModelEstimator(backbone: Optional[dict] = None, neck: Optional[dict] = None, head: Optional[dict] = None, disc: Optional[dict] = None, registrant: Optional[dict] = None, body_model_train: Optional[dict] = None, body_model_test: Optional[dict] = None, convention: Optional[str] = 'human_data', loss_keypoints2d: Optional[dict] = None, loss_keypoints3d: Optional[dict] = None, loss_vertex: Optional[dict] = None, loss_smpl_pose: Optional[dict] = None, loss_smpl_betas: Optional[dict] = None, loss_camera: Optional[dict] = None, loss_adv: Optional[dict] = None, init_cfg: Optional[Union[list, dict]] = None)[source]
forward_test(img_metas: dict, **kwargs)[source]

Defines the computation performed at every call when testing.

backbones

class mmhuman3d.models.backbones.ResNet(depth, in_channels=3, stem_channels=None, base_channels=64, num_stages=4, strides=(1, 2, 2, 2), dilations=(1, 1, 1, 1), out_indices=(0, 1, 2, 3), style='pytorch', deep_stem=False, avg_down=False, frozen_stages=- 1, conv_cfg=None, norm_cfg={'requires_grad': True, 'type': 'BN'}, norm_eval=True, dcn=None, stage_with_dcn=(False, False, False, False), plugins=None, with_cp=False, zero_init_residual=True, pretrained=None, init_cfg=None)[source]

ResNet backbone.

Parameters
  • depth (int) – Depth of resnet, from {18, 34, 50, 101, 152}.

  • stem_channels (int) – Number of stem channels. If not specified, it will be the same as base_channels. Default: None.
  • base_channels (int) – Number of base channels of res layer. Default: 64.

  • in_channels (int) – Number of input image channels. Default: 3.

  • num_stages (int) – Resnet stages. Default: 4.

  • strides (Sequence[int]) – Strides of the first block of each stage.

  • dilations (Sequence[int]) – Dilation of each stage.

  • out_indices (Sequence[int]) – Output from which stages.

  • style (str) – pytorch or caffe. If set to “pytorch”, the stride-two layer is the 3x3 conv layer, otherwise the stride-two layer is the first 1x1 conv layer.

  • deep_stem (bool) – Replace the 7x7 conv in the input stem with three 3x3 convs.

  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck.

  • frozen_stages (int) – Stages to be frozen (stop grad and set eval mode). -1 means not freezing any parameters.

  • norm_cfg (dict) – Dictionary to construct and config norm layer.

  • norm_eval (bool) – Whether to set norm layers to eval mode, namely, freeze running stats (mean and var). Note: Effect on Batch Norm and its variants only.

  • plugins (list[dict]) –

    List of plugins for stages, each dict contains:

      • cfg (dict, required): Cfg dict to build plugin.

      • position (str, required): Position inside block to insert plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'.

      • stages (tuple[bool], optional): Stages to apply plugin, length should be the same as num_stages.

  • with_cp (bool) – Use checkpoint or not. Using checkpoint will save some memory while slowing down the training speed.

  • zero_init_residual (bool) – Whether to use zero init for last norm layer in resblocks to let them behave as identity.

  • pretrained (str, optional) – model pretrained path. Default: None

  • init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None

Example

>>> from mmhuman3d.models import ResNet
>>> import torch
>>> self = ResNet(depth=18)
>>> self.eval()
>>> inputs = torch.rand(1, 3, 32, 32)
>>> level_outputs = self.forward(inputs)
>>> for level_out in level_outputs:
...     print(tuple(level_out.shape))
(1, 64, 8, 8)
(1, 128, 4, 4)
(1, 256, 2, 2)
(1, 512, 1, 1)
forward(x)[source]

Forward function.

make_res_layer(**kwargs)[source]

Pack all blocks in a stage into a ResLayer.

make_stage_plugins(plugins, stage_idx)[source]

Make plugins for the stage_idx-th ResNet stage. Currently we support inserting context_block, empirical_attention_block and nonlocal_block into backbones like ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of the Bottleneck block. An example of the plugins format:

>>> plugins=[
...     dict(cfg=dict(type='xxx', arg1='xxx'),
...          stages=(False, True, True, True),
...          position='after_conv2'),
...     dict(cfg=dict(type='yyy'),
...          stages=(True, True, True, True),
...          position='after_conv3'),
...     dict(cfg=dict(type='zzz', postfix='1'),
...          stages=(True, True, True, True),
...          position='after_conv3'),
...     dict(cfg=dict(type='zzz', postfix='2'),
...          stages=(True, True, True, True),
...          position='after_conv3')
... ]
>>> self = ResNet(depth=18)
>>> stage_plugins = self.make_stage_plugins(plugins, 0)
>>> assert len(stage_plugins) == 3

Suppose stage_idx=0, the structure of blocks in the stage would be:

conv1 -> conv2 -> conv3 -> yyy -> zzz1 -> zzz2

Suppose stage_idx=1, the structure of blocks in the stage would be:

conv1 -> conv2 -> xxx -> conv3 -> yyy -> zzz1 -> zzz2

If stages is missing, the plugin would be applied to all stages.

Parameters
  • plugins (list[dict]) – List of plugin configs to build. The postfix is required if multiple plugins of the same type are inserted.

  • stage_idx (int) – Index of the stage to build plugins for.

Returns

Plugins for current stage

Return type

list[dict]

property norm1

the normalization layer named “norm1”

Type

nn.Module

train(mode=True)[source]

Convert the model into training mode while keeping the normalization layers frozen.
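
Example (an illustrative sketch of how frozen_stages and norm_eval interact with train(); depth and input size below are arbitrary)

>>> from mmhuman3d.models import ResNet
>>> import torch
>>> self = ResNet(depth=50, frozen_stages=1, norm_eval=True)
>>> self.train()  # the frozen stage and all norm layers stay in eval mode
>>> inputs = torch.rand(1, 3, 64, 64)
>>> level_outputs = self.forward(inputs)
>>> len(level_outputs)  # one feature map per entry of out_indices
4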

class mmhuman3d.models.backbones.ResNetV1d(**kwargs)[source]

ResNetV1d variant described in Bag of Tricks. Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in the input stem with three 3x3 convs. And in the downsampling block, a 2x2 avg_pool with stride 2 is added before conv, whose stride is changed to 1.

discriminators

class mmhuman3d.models.discriminators.SMPLDiscriminator(beta_channel=(10, 5, 1), per_joint_channel=(9, 32, 32, 1), full_pose_channel=(736, 1024, 1024, 1))[source]

Discriminator for SMPL pose and shape parameters.

It is composed of a discriminator for SMPL shape parameters, a discriminator for SMPL pose parameters of all joints and a discriminator for SMPL pose parameters of each joint.

Parameters
  • beta_channel (tuple of int) – Tuple of neuron counts of the discriminator of shape parameters. Defaults to (10, 5, 1)

  • per_joint_channel (tuple of int) – Tuple of neuron count of the discriminator of each joint. Defaults to (9, 32, 32, 1)

  • full_pose_channel (tuple of int) – Tuple of neuron count of the discriminator of full pose. Defaults to (23*32, 1024, 1024, 1)

forward(thetas)[source]

Forward function.

init_weights()[source]

Initialize model weights.

necks

class mmhuman3d.models.necks.TemporalGRUEncoder(input_size: Optional[int] = 2048, num_layers: Optional[int] = 1, hidden_size: Optional[int] = 2048, init_cfg: Optional[Union[list, dict]] = None)[source]

TemporalEncoder used for VIBE. Adapted from https://github.com/mkocabas/VIBE.

Parameters
  • input_size (int, optional) – dimension of input feature. Default: 2048.

  • num_layers (int, optional) – number of layers for GRU. Default: 1.

  • hidden_size (int, optional) – hidden size for GRU. Default: 2048.

  • init_cfg (dict or list[dict], optional) – Initialization config dict. Default: None.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
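
Example (a minimal sketch; the input layout (batch, sequence length, feature dim) is an assumption based on the VIBE-style temporal encoder, and the tensor sizes are illustrative)

>>> import torch
>>> from mmhuman3d.models.necks import TemporalGRUEncoder
>>> neck = TemporalGRUEncoder(input_size=2048, num_layers=1, hidden_size=2048)
>>> x = torch.rand(2, 16, 2048)  # assumed (batch, seq_len, feature) layout
>>> out = neck(x)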

heads

class mmhuman3d.models.heads.HMRHead(feat_dim, smpl_mean_params=None, npose=144, nbeta=10, ncam=3, hdim=1024, init_cfg=None)[source]
forward(x, init_pose=None, init_shape=None, init_cam=None, n_iter=3)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

class mmhuman3d.models.heads.HybrIKHead(feature_channel=512, deconv_dim=[256, 256, 256], num_joints=29, depth_dim=64, height_dim=64, width_dim=64, smpl_mean_params=None)[source]

HybrIK parameters regressor head.

Parameters
  • feature_channel (int) – Number of input channels

  • deconv_dim (List[int]) – List of deconvolution dimensions

  • num_joints (int) – Number of keypoints

  • depth_dim (int) – Depth dimension

  • height_dim (int) – Height dimension

  • width_dim (int) – Width dimension

  • smpl_mean_params (str) – file name of the mean SMPL parameters

flip_phi(pred_phi)[source]

Flip phi.

Parameters

pred_phi (torch.Tensor) – phi in shape (Num_twistx2)

Returns

flipped phi in shape (Num_twistx2)

Return type

pred_phi (torch.Tensor)

flip_uvd_coord(pred_jts, flip=False, flatten=True)[source]

Flip uvd coordinates.

Parameters
  • pred_jts (torch.Tensor) – predicted uvd coordinates with shape (Bx87)

  • flip (bool) – Store True to flip uvd coordinates. Default: False.

  • flatten (bool) – Store True to reshape uvd_coordinates to shape (Bx29x3) Default: True

Returns

flipped uvd coordinates with shape (Bx29x3)

Return type

pred_jts (torch.Tensor)

forward(feature, trans_inv, intrinsic_param, joint_root, depth_factor, smpl_layer, flip_item=None, flip_output=False)[source]

Forward function.

Parameters
  • feature (torch.Tensor) – features extracted from backbone

  • trans_inv (torch.Tensor) – inverse affine transformation matrix with shape (Bx2x3)

  • intrinsic_param (torch.Tensor) – camera intrinsic matrix with shape (Bx3x3)

  • joint_root (torch.Tensor) – root joint coordinate with shape (Bx3)

  • depth_factor (float) – depth factor with shape (Bx1)

  • smpl_layer (torch.Tensor) – smpl body model

  • flip_item (List[torch.Tensor]|None) – list containing items to flip

  • flip_output (bool) – Store True to flip output. Default: False

Returns

Dict containing model predictions.

Return type

output (dict)

uvd_to_cam(uvd_jts, trans_inv, intrinsic_param, joint_root, depth_factor, return_relative=True)[source]

Project uvd coordinates to camera frame.

Parameters
  • uvd_jts (torch.Tensor) – uvd coordinates with shape (BxNum_jointsx3)

  • trans_inv (torch.Tensor) – inverse affine transformation matrix with shape (Bx2x3)

  • intrinsic_param (torch.Tensor) – camera intrinsic matrix with shape (Bx3x3)

  • joint_root (torch.Tensor) – root joint coordinate with shape (Bx3)

  • depth_factor (float) – depth factor with shape (Bx1)

  • return_relative (bool) – Store True to return root normalized relative coordinates. Default: True.

Returns

uvd coordinates in camera frame with shape (BxNum_jointsx3)

Return type

xyz_jts (torch.Tensor)

losses

class mmhuman3d.models.losses.CameraPriorLoss(scale=10, reduction='mean', loss_weight=1.0)[source]

Prior loss for predicted camera.

Parameters
  • reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.

  • scale (float, optional) – The scale coefficient for regularizing camera parameters. Defaults to 10

  • loss_weight (float, optional) – The weight of the loss. Defaults to 1.0

forward(cameras, loss_weight_override=None, reduction_override=None)[source]

Forward function of loss.

Parameters
  • cameras (torch.Tensor) – The predicted camera parameters

  • loss_weight_override (float, optional) – The weight of loss used to override the original weight of loss

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None

Returns

The calculated loss

Return type

torch.Tensor

class mmhuman3d.models.losses.GANLoss(gan_type, real_label_val=1.0, fake_label_val=0.0, loss_weight=1.0)[source]

Define GAN loss.

Parameters
  • gan_type (str) – Support ‘vanilla’, ‘lsgan’, ‘wgan’, ‘hinge’.

  • real_label_val (float) – The value for real label. Default: 1.0.

  • fake_label_val (float) – The value for fake label. Default: 0.0.

  • loss_weight (float) – Loss weight. Default: 1.0. Note that loss_weight is only for generators, and it is always 1.0 for discriminators.

forward(input, target_is_real, is_disc=False)[source]
Parameters
  • input (Tensor) – The input for the loss module, i.e., the network prediction.

  • target_is_real (bool) – Whether the target is real or fake.

  • is_disc (bool) – Whether the loss is for discriminators or not. Default: False.

Returns

GAN loss value.

Return type

Tensor

get_target_label(input, target_is_real)[source]

Get target label.

Parameters
  • input (Tensor) – Input tensor.

  • target_is_real (bool) – Whether the target is real or fake.

Returns

Target tensor. Return bool for wgan, otherwise, return Tensor.

Return type

(bool | Tensor)
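
Example (a hedged usage sketch of GANLoss based on the signature above; the prediction tensor shape is arbitrary)

>>> import torch
>>> from mmhuman3d.models.losses import GANLoss
>>> gan_loss = GANLoss(gan_type='vanilla', real_label_val=1.0, fake_label_val=0.0, loss_weight=1.0)
>>> pred = torch.rand(4, 1)
>>> loss_g = gan_loss(pred, target_is_real=True, is_disc=False)   # generator branch
>>> loss_d = gan_loss(pred, target_is_real=False, is_disc=True)   # discriminator branch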

class mmhuman3d.models.losses.JointPriorLoss(reduction='mean', loss_weight=1.0, use_full_body=False, smooth_spine=False, smooth_spine_loss_weight=1.0)[source]

Prior loss for joint angles.

Parameters
  • reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.

  • loss_weight (float, optional) – The weight of the loss. Defaults to 1.0

  • use_full_body (bool, optional) – Use full set of joint constraints (in standard joint angles).

  • smooth_spine (bool, optional) – Ensure smooth spine rotations

  • smooth_spine_loss_weight (float, optional) – An additional weight factor multiplied on smooth spine loss

forward(body_pose, loss_weight_override=None, reduction_override=None)[source]

Forward function of loss.

Parameters
  • body_pose (torch.Tensor) – The body pose parameters

  • loss_weight_override (float, optional) – The weight of loss used to override the original weight of loss

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None

Returns

The calculated loss

Return type

torch.Tensor

class mmhuman3d.models.losses.KeypointMSELoss(reduction='mean', loss_weight=1.0, sigma=1.0)[source]

MSELoss for 2D and 3D keypoints.

Parameters
  • reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.

  • loss_weight (float, optional) – The weight of the loss. Defaults to 1.0

  • sigma (float, optional) – Weighting parameter of the Geman-McClure error function. Defaults to 1.0 (no effect).

forward(pred, target, pred_conf=None, target_conf=None, keypoint_weight=None, avg_factor=None, loss_weight_override=None, reduction_override=None)[source]

Forward function of loss.

Parameters
  • pred (torch.Tensor) – The prediction. Shape should be (N, K, 2/3). N: batch size. K: number of keypoints.

  • target (torch.Tensor) – The learning target of the prediction. Shape should be the same as pred.

  • pred_conf (optional, torch.Tensor) – Confidence of predicted keypoints. Shape should be (N, K).

  • target_conf (optional, torch.Tensor) – Confidence of target keypoints. Shape should be the same as pred_conf.

  • keypoint_weight (optional, torch.Tensor) – keypoint-wise weight. Shape should be (K,). This allows different weights to be assigned to different body parts.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • loss_weight_override (float, optional) – The overall weight of loss used to override the original weight of loss.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.

Returns

The calculated loss

Return type

torch.Tensor
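
Example (a minimal sketch following the forward signature above; tensor sizes are illustrative)

>>> import torch
>>> from mmhuman3d.models.losses import KeypointMSELoss
>>> loss = KeypointMSELoss(reduction='mean', loss_weight=1.0)
>>> pred = torch.rand(2, 17, 3)       # (N, K, 3)
>>> target = torch.rand(2, 17, 3)
>>> target_conf = torch.ones(2, 17)   # (N, K)
>>> value = loss(pred, target, target_conf=target_conf)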

class mmhuman3d.models.losses.L1Loss(reduction='mean', loss_weight=1.0)[source]

L1 loss.

Parameters
  • reduction (str, optional) – The method to reduce the loss. Options are “none”, “mean” and “sum”.

  • loss_weight (float, optional) – The weight of loss.

forward(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – The prediction.

  • target (torch.Tensor) – The learning target of the prediction.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.
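
Example (a small sketch of L1Loss with an element-wise weight; the values are chosen so that the unweighted and weighted results differ)

>>> import torch
>>> from mmhuman3d.models.losses import L1Loss
>>> loss = L1Loss(reduction='mean', loss_weight=1.0)
>>> pred = torch.tensor([0., 2., 3.])
>>> target = torch.tensor([1., 1., 1.])
>>> weight = torch.tensor([1., 0., 1.])
>>> unweighted = loss(pred, target)
>>> weighted = loss(pred, target, weight=weight)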

class mmhuman3d.models.losses.MSELoss(reduction='mean', loss_weight=1.0)[source]

MSELoss.

Parameters
  • reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.

  • loss_weight (float, optional) – The weight of the loss. Defaults to 1.0

forward(pred, target, weight=None, avg_factor=None, reduction_override=None)[source]

Forward function of loss.

Parameters
  • pred (torch.Tensor) – The prediction.

  • target (torch.Tensor) – The learning target of the prediction.

  • weight (torch.Tensor, optional) – Weight of the loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.

Returns

The calculated loss

Return type

torch.Tensor

class mmhuman3d.models.losses.MaxMixturePrior(prior_folder='data', num_gaussians=8, dtype=torch.float32, epsilon=1e-16, use_merged=True, reduction=None, loss_weight=1.0)[source]

Ref: SMPLify-X https://github.com/vchoutas/smplify-x/blob/master/smplifyx/prior.py

forward(body_pose, loss_weight_override=None, reduction_override=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_mean()[source]

Returns the mean of the mixture.

log_likelihood(pose)[source]

Create graph operation for negative log-likelihood calculation.

class mmhuman3d.models.losses.ShapePriorLoss(reduction='mean', loss_weight=1.0)[source]

Prior loss for body shape parameters.

Parameters
  • reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.

  • loss_weight (float, optional) – The weight of the loss. Defaults to 1.0

forward(betas, loss_weight_override=None, reduction_override=None)[source]

Forward function of loss.

Parameters
  • betas (torch.Tensor) – The body shape parameters

  • loss_weight_override (float, optional) – The weight of loss used to override the original weight of loss

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None

Returns

The calculated loss

Return type

torch.Tensor

class mmhuman3d.models.losses.SmoothJointLoss(reduction='mean', loss_weight=1.0, degree=False)[source]

Smooth loss for joint angles.

Parameters
  • reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.

  • loss_weight (float, optional) – The weight of the loss. Defaults to 1.0

  • degree (bool, optional) – The flag which represents whether the input tensor is in degree or radian.

forward(body_pose, loss_weight_override=None, reduction_override=None)[source]

Forward function of loss.

Parameters
  • body_pose (torch.Tensor) – The body pose parameters

  • loss_weight_override (float, optional) – The weight of loss used to override the original weight of loss

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None

Returns

The calculated loss

Return type

torch.Tensor

class mmhuman3d.models.losses.SmoothL1Loss(beta=1.0, reduction='mean', loss_weight=1.0)[source]

Smooth L1 loss.

Parameters
  • beta (float, optional) – The threshold in the piecewise function. Defaults to 1.0.

  • reduction (str, optional) – The method to reduce the loss. Options are “none”, “mean” and “sum”. Defaults to “mean”.

  • loss_weight (float, optional) – The weight of loss.

forward(pred, target, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function.

Parameters
  • pred (torch.Tensor) – The prediction.

  • target (torch.Tensor) – The learning target of the prediction.

  • weight (torch.Tensor, optional) – The weight of loss for each prediction. Defaults to None.

  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None.

class mmhuman3d.models.losses.SmoothPelvisLoss(reduction='mean', loss_weight=1.0, degree=False)[source]

Smooth loss for pelvis angles.

Parameters
  • reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.

  • loss_weight (float, optional) – The weight of the loss. Defaults to 1.0

  • degree (bool, optional) – The flag which represents whether the input tensor is in degree or radian.

forward(global_orient, loss_weight_override=None, reduction_override=None)[source]

Forward function of loss.

Parameters
  • global_orient (torch.Tensor) – The global orientation parameters

  • loss_weight_override (float, optional) – The weight of loss used to override the original weight of loss

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None

Returns

The calculated loss

Return type

torch.Tensor

class mmhuman3d.models.losses.SmoothTranslationLoss(reduction='mean', loss_weight=1.0)[source]

Smooth loss for translations.

Parameters
  • reduction (str, optional) – The method that reduces the loss to a scalar. Options are “none”, “mean” and “sum”.

  • loss_weight (float, optional) – The weight of the loss. Defaults to 1.0

forward(translation, loss_weight_override=None, reduction_override=None)[source]

Forward function of loss.

Parameters
  • translation (torch.Tensor) – The body translation parameters

  • loss_weight_override (float, optional) – The weight of loss used to override the original weight of loss

  • reduction_override (str, optional) – The reduction method used to override the original reduction method of the loss. Defaults to None

Returns

The calculated loss

Return type

torch.Tensor

mmhuman3d.models.losses.convert_to_one_hot(targets: torch.Tensor, classes)torch.Tensor[source]

This function converts target class indices to one-hot vectors, given the number of classes.

Parameters
  • targets (Tensor) – The ground truth label of the prediction with shape (N, 1)

  • classes (int) – the number of classes.

Returns

One-hot encoded targets.

Return type

Tensor
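
Example (a quick sketch of convert_to_one_hot; the class indices and class count are illustrative)

>>> import torch
>>> from mmhuman3d.models.losses import convert_to_one_hot
>>> targets = torch.tensor([[0], [2]])         # (N, 1) ground truth class indices
>>> one_hot = convert_to_one_hot(targets, 3)   # one-hot vectors with 3 classes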

mmhuman3d.models.losses.reduce_loss(loss, reduction)[source]

Reduce loss as specified.

Parameters
  • loss (Tensor) – Elementwise loss tensor.

  • reduction (str) – Options are “none”, “mean” and “sum”.

Returns

Reduced loss tensor.

Return type

Tensor

mmhuman3d.models.losses.weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None)[source]

Apply element-wise weight and reduce loss.

Parameters
  • loss (Tensor) – Element-wise loss.

  • weight (Tensor) – Element-wise weights.

  • reduction (str) – Same as built-in losses of PyTorch.

  • avg_factor (float) – Average factor when computing the mean of losses.

Returns

Processed loss values.

Return type

Tensor

mmhuman3d.models.losses.weighted_loss(loss_func)[source]

Create a weighted version of a given loss function.

To use this decorator, the loss function must have the signature like loss_func(pred, target, **kwargs). The function only needs to compute element-wise loss without any reduction. This decorator will add weight and reduction arguments to the function. The decorated function will have the signature like loss_func(pred, target, weight=None, reduction=’mean’, avg_factor=None, **kwargs).

Example

>>> import torch
>>> @weighted_loss
>>> def l1_loss(pred, target):
>>>     return (pred - target).abs()
>>> pred = torch.Tensor([0, 2, 3])
>>> target = torch.Tensor([1, 1, 1])
>>> weight = torch.Tensor([1, 0, 1])
>>> l1_loss(pred, target)
tensor(1.3333)
>>> l1_loss(pred, target, weight)
tensor(1.)
>>> l1_loss(pred, target, reduction='none')
tensor([1., 1., 2.])
>>> l1_loss(pred, target, weight, avg_factor=2)
tensor(1.5000)

utils

class mmhuman3d.models.utils.FitsDict(fits='static')[source]

Dictionary keeping track of the best fit per image in the training set.

Ref: https://github.com/nkolot/SPIN/blob/master/train/fits_dict.py

flip_pose(pose, is_flipped)[source]

Flip SMPL pose parameters.

rotate_pose(pose, rot)[source]

Rotate SMPL pose parameters by rot degrees.

save()[source]

Save dictionary state to disk.

class mmhuman3d.models.utils.ResLayer(block, inplanes, planes, num_blocks, stride=1, avg_down=False, conv_cfg=None, norm_cfg={'type': 'BN'}, downsample_first=True, **kwargs)[source]

ResLayer to build ResNet style backbone.

Parameters
  • block (nn.Module) – block used to build ResLayer.

  • inplanes (int) – inplanes of block.

  • planes (int) – planes of block.

  • num_blocks (int) – number of blocks.

  • stride (int) – stride of the first block. Default: 1

  • avg_down (bool) – Use AvgPool instead of stride conv when downsampling in the bottleneck. Default: False

  • conv_cfg (dict) – dictionary to construct and config conv layer. Default: None

  • norm_cfg (dict) – dictionary to construct and config norm layer. Default: dict(type=’BN’)

  • downsample_first (bool) – Downsample at the first block or last block. False for Hourglass, True for ResNet. Default: True

class mmhuman3d.models.utils.SimplifiedBasicBlock(inplanes, planes, stride=1, dilation=1, downsample=None, style='pytorch', with_cp=False, conv_cfg=None, norm_cfg={'type': 'BN'}, dcn=None, plugins=None, init_fg=None)[source]

Simplified version of original basic residual block. This is used in SCNet.

  • Norm layer is now optional

  • Last ReLU in forward function is removed

forward(x)[source]

Forward function.

property norm1

normalization layer after the first convolution layer

Type

nn.Module

property norm2

normalization layer after the second convolution layer

Type

nn.Module

mmhuman3d.models.utils.batch_inverse_kinematics_transform(pose_skeleton, global_orient, phis, rest_pose, children, parents, dtype=torch.float32, train=False, leaf_thetas=None)[source]

Applies inverse kinematics transform to joints in a batch.

Parameters
  • pose_skeleton (torch.tensor) – Locations of estimated pose skeleton with shape (Bx29x3)

  • global_orient (torch.tensor|none) – Tensor of global rotation matrices with shape (Bx1x3x3)

  • phis (torch.tensor) – Rotation on bone axis parameters with shape (Bx23x2)

  • rest_pose (torch.tensor) – Locations of rest (Template) pose with shape (Bx29x3)

  • children (List[int]) – list of indexes of kinematic children with len 29

  • parents (List[int]) – list of indexes of kinematic parents with len 29

  • dtype (torch.dtype, optional) – Data type of the created tensors. Default: torch.float32

  • train (bool) – Store True in train mode. Default: False

  • leaf_thetas (torch.tensor, optional) – Rotation matrices for 5 leaf joints (Bx5x3x3). Default: None

Returns

Rotation matrices of all joints with shape (Bx29x3x3). rotate_rest_pose (torch.tensor): Locations of the rotated rest/template pose with shape (Bx29x3).

Return type

rot_mats (torch.tensor)

mmhuman3d.data

data

datasets

class mmhuman3d.data.datasets.AdversarialDataset(train_dataset: torch.utils.data.dataset.Dataset, adv_dataset: torch.utils.data.dataset.Dataset)[source]

Mix Dataset for the adversarial training in 3D human mesh estimation task.

The dataset combines data from two datasets and returns a dict containing data from both.

Parameters
  • train_dataset (Dataset) – Dataset for 3D human mesh estimation.

  • adv_dataset (Dataset) – Dataset for adversarial learning.

class mmhuman3d.data.datasets.BaseDataset(data_prefix: str, pipeline: list, ann_file: Optional[str] = None, test_mode: Optional[bool] = False, dataset_name: Optional[str] = None)[source]

Base dataset.

Parameters
  • data_prefix (str) – the prefix of data path.

  • pipeline (list) – a list of dict, where each element represents an operation defined in mmhuman3d.datasets.pipelines.

  • ann_file (str | None, optional) – the annotation file. When ann_file is str, the subclass is expected to read from the ann_file. When ann_file is None, the subclass is expected to read according to data_prefix.

  • test_mode (bool) – whether in train mode or test mode. Default: False.

  • dataset_name (str | None, optional) – the name of dataset. It is used to identify the type of evaluation metric. Default: None.

abstract load_annotations()[source]

Load annotations from ann_file

prepare_data(idx: int)[source]

Prepare raw data for the idx-th data.

class mmhuman3d.data.datasets.Compose(transforms)[source]

Compose a data pipeline with a sequence of transforms.

Parameters

transforms (list[dict | callable]) – Either config dicts of transforms or transform objects.

class mmhuman3d.data.datasets.ConcatDataset(datasets: list)[source]

A wrapper of concatenated dataset.

Same as torch.utils.data.dataset.ConcatDataset, but adds the get_cat_ids function.

Parameters

datasets (list[Dataset]) – A list of datasets.

class mmhuman3d.data.datasets.DistributedSampler(dataset, num_replicas=None, rank=None, shuffle=True, round_up=True)[source]
class mmhuman3d.data.datasets.HumanImageDataset(data_prefix: str, pipeline: list, dataset_name: str, body_model: Optional[dict] = None, ann_file: Optional[str] = None, convention: Optional[str] = 'human_data', test_mode: Optional[bool] = False)[source]

Human Image Dataset.

Parameters
  • data_prefix (str) – the prefix of data path.

  • pipeline (list) – a list of dict, where each element represents an operation defined in mmhuman3d.datasets.pipelines.

  • dataset_name (str | None) – the name of dataset. It is used to identify the type of evaluation metric. Default: None.

  • body_model (dict | None, optional) – the config for body model, which will be used to generate meshes and keypoints. Default: None.

  • ann_file (str | None, optional) – the annotation file. When ann_file is str, the subclass is expected to read from the ann_file. When ann_file is None, the subclass is expected to read according to data_prefix.

  • convention (str, optional) – keypoints convention. Keypoints will be converted from “human_data” to the given one. Default: “human_data”

  • test_mode (bool, optional) – in train mode or test mode. Default: False.

evaluate(outputs: list, res_folder: str, metric: Optional[str] = 'joint_error')[source]

Evaluate 3D keypoint results.

Parameters
  • outputs (list) – results from model inference.

  • res_folder (str) – path to store results.

  • metric (str) – the type of metric. Default: ‘joint_error’

Returns

A dict of all evaluation results.

Return type

dict

get_annotation_file()[source]

Get path of the annotation file.

load_annotations()[source]

Load annotation from the annotation file.

Here we simply use HumanData to parse the annotation.

prepare_data(idx: int)[source]

Generate and transform data.

prepare_raw_data(idx: int)[source]

Get item from self.human_data.

class mmhuman3d.data.datasets.HumanVideoDataset(data_prefix: str, pipeline: list, dataset_name: str, seq_len: Optional[int] = 16, overlap: Optional[float] = 0.0, only_vid_name: Optional[bool] = False, body_model: Optional[dict] = None, ann_file: Optional[str] = None, convention: Optional[str] = 'human_data', test_mode: Optional[bool] = False)[source]

Human Video Dataset.

Parameters
  • data_prefix (str) – the prefix of data path.

  • pipeline (list) – a list of dict, where each element represents an operation defined in mmhuman3d.datasets.pipelines.

  • dataset_name (str | None) – the name of dataset. It is used to identify the type of evaluation metric. Default: None.

  • seq_len (int, optional) – the length of input sequence. Default: 16.

  • overlap (float, optional) – the overlap between different sequences. Default: 0

  • only_vid_name (bool, optional) – the format of image_path. If only_vid_name is true, image_path only contains the video name. Otherwise, image_path contains both video_name and frame index.

  • body_model (dict | None, optional) – the config for body model, which will be used to generate meshes and keypoints. Default: None.

  • ann_file (str | None, optional) – the annotation file. When ann_file is str, the subclass is expected to read from the ann_file. When ann_file is None, the subclass is expected to read according to data_prefix.

  • convention (str, optional) – keypoints convention. Keypoints will be converted from “human_data” to the given one. Default: “human_data”

  • test_mode (bool, optional) – in train mode or test mode. Default: False.

prepare_data(idx: int)[source]

Prepare data for each chunk.

Step 1: get annotation from each frame. Step 2: add metas of each chunk.

class mmhuman3d.data.datasets.HybrIKHumanImageDataset(data_prefix, pipeline, dataset_name, ann_file, test_mode=False)[source]

Dataset for HybrIK training. The dataset loads raw features and applies the specified transforms to return a dict containing the image tensors and other information.

Parameters
  • data_prefix (str) – Path to a directory where preprocessed datasets are held.

  • pipeline (list[dict | callable]) – A sequence of data transforms.

  • dataset_name (str) – accepted names include ‘h36m’, ‘pw3d’, ‘mpi_inf_3dhp’, ‘coco’

  • ann_file (str) – Name of annotation file.

  • test_mode (bool) – Store True when building test dataset. Default: False.

evaluate(outputs, res_folder, metric='joint_error', logger=None)[source]

Evaluate 3D keypoint results.

static get_3d_keypoints_vis(keypoints)[source]

Get 3d keypoints and visibility mask.

Parameters

keypoints – 2d (NxKx3) or 3d (NxKx4) keypoints with visibility. N refers to the number of datapoints, K refers to the number of keypoints.

Returns

(NxKx3) 3d keypoints. joint_vis (np.ndarray): (NxKx3) visibility mask for keypoints

Return type

joint_img (np.ndarray)

get_annotation_file()[source]

Obtain annotation file path from data prefix.

load_annotations()[source]

Load annotations.

class mmhuman3d.data.datasets.MeshDataset(data_prefix: str, pipeline: list, dataset_name: str, ann_file: Optional[str] = None, test_mode: Optional[bool] = False)[source]

Mesh Dataset. This dataset only contains smpl data.

Parameters
  • data_prefix (str) – the prefix of data path.

  • pipeline (list) – a list of dict, where each element represents an operation defined in mmhuman3d.datasets.pipelines.

  • dataset_name (str | None) – the name of dataset. It is used to identify the type of evaluation metric. Default: None.

  • ann_file (str | None, optional) – the annotation file. When ann_file is str, the subclass is expected to read from the ann_file. When ann_file is None, the subclass is expected to read according to data_prefix.

  • test_mode (bool, optional) – in train mode or test mode. Default: False.

load_annotations()[source]

Load annotations from ann_file

class mmhuman3d.data.datasets.MixedDataset(configs: list, partition: list, num_data: Optional[int] = None)[source]

Mixed Dataset.

Parameters
  • configs (list) – the list of different datasets.

  • partition (list) – the ratio of datasets in each batch.

  • num_data (int | None, optional) – if num_data is not None, the number of iterations is set to this fixed value. Otherwise, the number of iterations is set to the maximum size of each single dataset. Default: None.

class mmhuman3d.data.datasets.RepeatDataset(dataset: torch.utils.data.dataset.Dataset, times: int)[source]

A wrapper of repeated dataset.

The length of repeated dataset will be times larger than the original dataset. This is useful when the data loading time is long but the dataset is small. Using RepeatDataset can reduce the data loading time between epochs.

Parameters
  • dataset (Dataset) – The dataset to be repeated.

  • times (int) – Repeat times.

mmhuman3d.data.datasets.build_dataloader(dataset: torch.utils.data.dataset.Dataset, samples_per_gpu: int, workers_per_gpu: int, num_gpus: Optional[int] = 1, dist: Optional[bool] = True, shuffle: Optional[bool] = True, round_up: Optional[bool] = True, seed: Optional[int] = None, **kwargs)[source]

Build PyTorch DataLoader.

In distributed training, each GPU/process has a dataloader. In non-distributed training, there is only one dataloader for all GPUs.

Parameters
  • dataset (Dataset) – A PyTorch dataset.

  • samples_per_gpu (int) – Number of training samples on each GPU, i.e., batch size of each GPU.

  • workers_per_gpu (int) – How many subprocesses to use for data loading for each GPU.

  • num_gpus (int, optional) – Number of GPUs. Only used in non-distributed training.

  • dist (bool, optional) – Distributed training/test or not. Default: True.

  • shuffle (bool, optional) – Whether to shuffle the data at every epoch. Default: True.

  • round_up (bool, optional) – Whether to round up the length of dataset by adding extra samples to make it evenly divisible. Default: True.

  • kwargs – any keyword argument to be used to initialize DataLoader

Returns

A PyTorch dataloader.

Return type

DataLoader
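
Example (a hedged sketch of build_dataloader in non-distributed mode; any PyTorch dataset works here, and the toy TensorDataset below is only for illustration)

>>> import torch
>>> from torch.utils.data import TensorDataset
>>> from mmhuman3d.data.datasets import build_dataloader
>>> dataset = TensorDataset(torch.rand(8, 3))
>>> loader = build_dataloader(dataset, samples_per_gpu=4, workers_per_gpu=0, dist=False, shuffle=True)
>>> batch = next(iter(loader))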

mmhuman3d.data.datasets.build_dataset(cfg: Union[dict, list, tuple], default_args: Optional[dict] = None)[source]

Build a dataset from the given config.

data_converters

data_structures

class mmhuman3d.data.data_structures.HumanData(*args: Any, **kwargs: Any)[source]
check_keypoints_compressed()bool[source]

Check whether the keypoints are compressed.

Returns

Whether the keypoints are compressed.

Return type

bool

compress_keypoints_by_mask()[source]

If a key contains ‘keypoints’, and f’{key}_mask’ is in self.keys(), invalid zeros will be removed and f’{key}_mask’ will be locked.

Raises

KeyError – A key containing ‘keypoints’ has been found but its corresponding mask is missing.

decompress_keypoints()[source]

If a key contains ‘keypoints’, and f’{key}_mask’ is in self.keys(), invalid zeros will be inserted to the right places and f’{key}_mask’ will be unlocked.

Raises

KeyError – A key containing ‘keypoints’ has been found but its corresponding mask is missing.

dump(npz_path: str, overwrite: bool = True)[source]

Dump keys and items to an npz file.

Parameters
  • npz_path (str) – Path to a dumped npz file.

  • overwrite (bool, optional) – Whether to overwrite if there is already a file. Defaults to True.

Raises
  • ValueError – npz_path does not end with ‘.npz’.

  • FileExistsError – When overwrite is False and file exists.

dump_by_pickle(pkl_path: str, overwrite: bool = True)[source]

Dump keys and items to a pickle file. It is a secondary dump method for cases when a HumanData instance is too large to be dumped by self.dump().

Parameters
  • pkl_path (str) – Path to a dumped pickle file.

  • overwrite (bool, optional) – Whether to overwrite if there is already a file. Defaults to True.

Raises
  • ValueError – pkl_path does not end with ‘.pkl’.

  • FileExistsError – When overwrite is False and file exists.

classmethod fromfile(npz_path: str)[source]

Construct a HumanData instance from an npz file.

Parameters

npz_path (str) – Path to a dumped npz file.

Returns

A HumanData instance loaded from file.

Return type

HumanData

get_key_strict()bool[source]

Get value of attribute key_strict.

Returns

Whether to raise error when setting unsupported keys.

Return type

bool

get_raw_value(key: mmhuman3d.data.data_structures.human_data._KT)mmhuman3d.data.data_structures.human_data._VT[source]

Get raw value from the dict. It acts the same as dict.__getitem__(k).

Parameters

key (_KT) – Key in dict.

Returns

Value to the key.

Return type

_VT

get_temporal_slice(stop: int)[source]
get_temporal_slice(start: int, stop: int)
get_temporal_slice(start: int, stop: int, step: int)

Slice all temporal values along timeline dimension.

Parameters
  • arg_0 (int) – When arg_1 is None, arg_0 is stop and start=0. When arg_1 is not None, arg_0 is start.

  • arg_1 (Union[int, Any], optional) – None or where to stop. Defaults to None.

  • step (int, optional) – Length of step. Defaults to 1.

Returns

A new HumanData instance with sliced values.

Return type

HumanData

get_value_in_shape(key: mmhuman3d.data.data_structures.human_data._KT, shape: Union[list, tuple], padding_constant: int = 0)numpy.ndarray[source]

Get value in a specific shape. For each dim, if the required shape is smaller than current shape, ndarray will be sliced. Otherwise, it will be padded with padding_constant at the end.

Parameters
  • key (_KT) – Key in dict. The value of this key must be an instance of numpy.ndarray.

  • shape (Union[list, tuple]) – Shape of the returned array. Its length must be equal to value.ndim. Set -1 for a dimension if you do not want to edit it.

  • padding_constant (int, optional) – The value to set the padded values for each axis. Defaults to 0.

Raises

ValueError – A value in shape is neither positive integer nor -1.

Returns

An array in required shape.

Return type

np.ndarray

load(npz_path: str)[source]

Load data from npz_path and update them to self.

Parameters

npz_path (str) – Path to a dumped npz file.

load_by_pickle(pkl_path: str)[source]

Load data from pkl_path and update them to self.

When a HumanData instance was dumped by self.dump_by_pickle(), use this to load it back.

Parameters

pkl_path (str) – Path to a dumped pickle file.

classmethod new(source_dict: Optional[dict] = None, key_strict: bool = False)[source]

Construct a HumanData instance from a dict.

Parameters
  • source_dict (dict, optional) – A dict with items in HumanData fashion. Defaults to None.

  • key_strict (bool, optional) – Whether to raise error when setting unsupported keys. Defaults to False.

Returns

A HumanData instance.

Return type

HumanData

pop_unsupported_items()[source]

Find every item with a key not in HumanData.SUPPORTED_KEYS, and pop it to save memory.

set_key_strict(value: bool)[source]

Set value of attribute key_strict.

Parameters

value (bool, optional) – Whether to raise error when setting unsupported keys. Defaults to True.

classmethod set_logger(logger: Optional[Union[logging.Logger, str]] = None)[source]

Set logger of HumanData class.

Parameters

logger (logging.Logger | str | None, optional) – The way to print summary. See mmcv.utils.print_log() for details. Defaults to None.

set_raw_value(key: mmhuman3d.data.data_structures.human_data._KT, val: mmhuman3d.data.data_structures.human_data._VT)None[source]

Set the raw value of self[key] to val after key check. It acts the same as dict.__setitem__(self, key, val) if the key satisfied constraints.

Parameters
  • key (_KT) – Key in dict.

  • val (_VT) – Value to the key.

Raises
  • KeyError – self.get_key_strict() is True and key cannot be found in HumanData.SUPPORTED_KEYS.

  • ValueError – Value is supported but doesn’t match definition.

property temporal_len: int

Get the temporal length of this HumanData instance.

Returns

Number of frames related to this instance.

Return type

int

to(device: Optional[Union[torch.device, str]] = device(type='cpu'), dtype: Optional[torch.dtype] = None, non_blocking: Optional[bool] = False, copy: Optional[bool] = False, memory_format: Optional[torch.memory_format] = None)dict[source]

Convert values in numpy.ndarray type to torch.Tensor, and move Tensors to the target device. All keys will exist in the returned dict.

Parameters
  • device (Union[torch.device, str], optional) – A specified device. Defaults to CPU_DEVICE.

  • dtype (torch.dtype, optional) – The data type of the expected torch.Tensor. If dtype is None, it is decided according to the numpy.ndarray. Defaults to None.

  • non_blocking (bool, optional) – When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. Defaults to False.

  • copy (bool, optional) – When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion. No matter what value copy is, Tensor constructed from numpy will not share the same memory with the source numpy.ndarray. Defaults to False.

  • memory_format (torch.memory_format, optional) – The desired memory format of returned Tensor. Not supported by pytorch-cpu. Defaults to None.

Returns

A dict with all numpy.ndarray values converted into torch.Tensor and all Tensors moved to the target device.

Return type

dict
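
Example (a minimal, hedged sketch of the HumanData workflow described above; the file path and the stored item are placeholders)

>>> from mmhuman3d.data.data_structures import HumanData
>>> human_data = HumanData.new(key_strict=False)
>>> human_data['image_path'] = ['%06d.png' % i for i in range(4)]
>>> human_data.dump('/tmp/example_human_data.npz', overwrite=True)
>>> loaded = HumanData.fromfile('/tmp/example_human_data.npz')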

mmhuman3d.utils

class mmhuman3d.utils.DistOptimizerHook(grad_clip=None, coalesce=True, bucket_size_mb=- 1)[source]
class mmhuman3d.utils.Existence(value)[source]

State of file existence.

mmhuman3d.utils.aa_to_ee(axis_angle: Union[torch.Tensor, numpy.ndarray], convention: str = 'xyz')Union[torch.Tensor, numpy.ndarray][source]

Convert axis angles to euler angles.

Parameters
  • axis_angle (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 3). ndim of input is unlimited.

  • convention (str, optional) – Convention string of three letters from {“x”, “y”, and “z”}. Defaults to ‘xyz’.

Returns

shape would be (…, 3).

Return type

Union[torch.Tensor, numpy.ndarray]

mmhuman3d.utils.aa_to_quat(axis_angle: Union[torch.Tensor, numpy.ndarray])Union[torch.Tensor, numpy.ndarray][source]

Convert axis angles to quaternions.

Parameters

axis_angle (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 3). ndim of input is unlimited.

Returns

shape would be (…, 4).

Return type

Union[torch.Tensor, numpy.ndarray]

mmhuman3d.utils.aa_to_rot6d(axis_angle: Union[torch.Tensor, numpy.ndarray])Union[torch.Tensor, numpy.ndarray][source]

Convert axis angles to rotation 6d representations.

Parameters

axis_angle (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 3). ndim of input is unlimited.

Returns

shape would be (…, 6).

Return type

Union[torch.Tensor, numpy.ndarray]

[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035

mmhuman3d.utils.aa_to_rotmat(axis_angle: Union[torch.Tensor, numpy.ndarray])Union[torch.Tensor, numpy.ndarray][source]

Convert axis angles to rotation matrices.

Parameters

axis_angle (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 3). ndim of input is unlimited.

Returns

shape would be (…, 3, 3).

Return type

Union[torch.Tensor, numpy.ndarray]
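
Example (a small sketch of the axis-angle conversion helpers above; the leading dimensions are arbitrary)

>>> import torch
>>> from mmhuman3d.utils import aa_to_rotmat, aa_to_ee, aa_to_quat
>>> axis_angle = torch.rand(2, 21, 3)
>>> rotmat = aa_to_rotmat(axis_angle)               # (2, 21, 3, 3)
>>> euler = aa_to_ee(axis_angle, convention='xyz')  # (2, 21, 3)
>>> quat = aa_to_quat(axis_angle)                   # (2, 21, 4)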

mmhuman3d.utils.aa_to_sja(axis_angle: Union[torch.Tensor, numpy.ndarray], R_t: Union[torch.Tensor, numpy.ndarray] = tensor([[[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[0.0, 0.0, - 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[0.0, 0.0, - 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[0.0, 0.0, - 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[0.0, 0.0, - 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]]]), R_t_inv: Union[torch.Tensor, numpy.ndarray] = tensor([[[1.0, - 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, - 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, 0.0, - 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, - 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, - 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, 0.0, - 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, 0.0, - 0.0], [0.0, 1.0, - 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, - 0.0], [0.0, 1.0, - 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, - 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, 0.0, - 0.0], [0.0, 1.0, - 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, - 0.0], [0.0, 1.0, - 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, - 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[0.0, - 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[- 0.0, 0.0, - 1.0], [- 0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[1.0, 0.0, - 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[0.0, - 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[- 0.0, 0.0, - 1.0], [- 0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[0.0, - 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[- 0.0, 0.0, - 1.0], [- 0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[0.0, - 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[- 0.0, 0.0, - 1.0], [- 0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]]))Union[torch.Tensor, numpy.ndarray][source]

Convert axis-angles to standard joint angles.

Parameters
  • axis_angle (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 21, 3), ndim of input is unlimited.

  • R_t (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 21, 3, 3). Transformation matrices from original axis-angle coordinate system to standard joint angle coordinate system, ndim of input is unlimited.

  • R_t_inv (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 21, 3, 3). Transformation matrices from standard joint angle coordinate system to original axis-angle coordinate system, ndim of input is unlimited.

Returns

shape would be (…, 3).

Return type

Union[torch.Tensor, numpy.ndarray]

mmhuman3d.utils.array_to_images(image_array: numpy.ndarray, output_folder: str, img_format: str = '%06d.png', resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, disable_log: bool = False)None[source]

Convert an array to images directly.

Parameters
  • image_array (np.ndarray) – shape should be (f * h * w * 3).

  • output_folder (str) – output folder for the images.

  • img_format (str, optional) – format of the images. Defaults to ‘%06d.png’.

  • resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional) – resolution (height, width) of the output. Defaults to None.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises
  • FileNotFoundError – check output folder.

  • TypeError – check input array.

Returns

None

mmhuman3d.utils.array_to_video(image_array: numpy.ndarray, output_path: str, fps: Union[int, float] = 30, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, disable_log: bool = False)None[source]

Convert an array to a video directly, gif not supported.

Parameters
  • image_array (np.ndarray) – shape should be (f * h * w * 3).

  • output_path (str) – output video file path.

  • fps (Union[int, float], optional) – fps. Defaults to 30.

  • resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional) – (height, width) of the output video. Defaults to None.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises
  • FileNotFoundError – check output path.

  • TypeError – check input array.

Returns

None.
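
Example (a hedged sketch of array_to_video; the output path is a placeholder and ffmpeg must be available on the system)

>>> import numpy as np
>>> from mmhuman3d.utils import array_to_video
>>> frames = (np.random.rand(30, 256, 256, 3) * 255).astype(np.uint8)   # (f, h, w, 3)
>>> array_to_video(frames, '/tmp/demo.mp4', fps=30)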

mmhuman3d.utils.batch_rodrigues(theta)[source]

Convert axis-angle representation to rotation matrix.

Parameters

theta – size = [B, 3]

Returns

Rotation matrix corresponding to the quaternion – size = [B, 3, 3]

mmhuman3d.utils.box2cs(bbox_xywh, aspect_ratio=1.0, bbox_scale_factor=1.25)[source]

Convert xywh coordinates to center and scale.

Parameters
  • bbox_xywh (numpy.ndarray) – the bbox in (x, y, w, h) format.

  • aspect_ratio (float, optional) – Defaults to 1.0.

  • bbox_scale_factor (float, optional) – Defaults to 1.25.

Returns

the center of the bbox. numpy.ndarray: the scale of the bbox w & h.

Return type

numpy.ndarray
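
Example (a minimal sketch with an illustrative bbox; per the description above, the call returns the center and the scale)

>>> import numpy as np
>>> from mmhuman3d.utils import box2cs
>>> bbox_xywh = np.array([50., 60., 100., 200.])
>>> center, scale = box2cs(bbox_xywh, aspect_ratio=1.0, bbox_scale_factor=1.25)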

mmhuman3d.utils.check_input_path(input_path: str, allowed_suffix: List[str] = [], tag: str = 'input file', path_type: typing_extensions.Literal[file, dir, auto] = 'auto')[source]

Check input folder or file.

Parameters
  • input_path (str) – input folder or file path.

  • allowed_suffix (List[str], optional) – Check the suffix of input_path. If a folder, it should be [] or [‘’]. If it could be either a folder or a file, it should be [suffixes…, ‘’]. Defaults to [].

  • tag (str, optional) – The string tag to specify the output type. Defaults to ‘input file’.

  • path_type (Literal[‘file’, ‘dir’, ‘auto’], optional) – Choose ‘file’ for a file and ‘dir’ for a folder. Choose ‘auto’ if it is allowed to be both. Defaults to ‘auto’.

Raises

FileNotFoundError – file does not exist or suffix does not match.

Returns

None

mmhuman3d.utils.check_path_existence(path_str: str, path_type: typing_extensions.Literal[file, dir, auto] = 'auto')mmhuman3d.utils.path_utils.Existence[source]

Check whether a file or a directory exists at the expected path.

Parameters
  • path_str (str) – Path to check.

  • path_type (Literal[‘file’, ‘dir’, ‘auto’], optional) – What kind of file we expect at the path. Choose among ‘file’, ‘dir’ and ‘auto’. Defaults to ‘auto’.

Raises

KeyError – if path_type conflicts with path_str

Returns

  1. FileExist: file at path_str exists.

  2. DirectoryExistEmpty: folder at path_str exists and is empty.

  3. DirectoryExistNotEmpty: folder at path_str exists and not empty.

  4. MissingParent: its parent doesn’t exist.

  5. DirectoryNotExist: expect a folder at path_str, but not found.

  6. FileNotExist: expect a file at path_str, but not found.

Return type

Existence

mmhuman3d.utils.check_path_suffix(path_str: str, allowed_suffix: Union[str, List[str]] = '')bool[source]

Check whether the suffix of the path is allowed.

Parameters
  • path_str (str) – Path to check.

  • allowed_suffix (List[str], optional) – What extension names are allowed. Offer a list like [‘.jpg’, ‘.jpeg’]. When it is [], all suffixes will be accepted. Use [‘’] to allow a directory. Defaults to [].

Returns

True: suffix test passed. False: suffix test failed.

Return type

bool
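
Example (a short sketch of the two path helpers above; the file path is a placeholder)

>>> from mmhuman3d.utils import check_path_suffix, check_path_existence, Existence
>>> check_path_suffix('demo/result.mp4', allowed_suffix=['.mp4', '.gif'])
True
>>> existence = check_path_existence('demo/result.mp4', path_type='file')
>>> existence == Existence.FileExist   # True only if the file actually exists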

mmhuman3d.utils.collect_env()[source]

Collect the information of the running environments.

mmhuman3d.utils.compress_video(input_path: str, output_path: str, compress_rate: int = 1, down_sample_scale: Union[float, int] = 1, fps: int = 30, disable_log: bool = False)None[source]

Compress a video file.

Parameters
  • input_path (str) – input video file path.

  • output_path (str) – output video file path.

  • compress_rate (int, optional) – compress rate, influences the bit rate. Defaults to 1.

  • down_sample_scale (Union[float, int], optional) – spatial down sample scale. Defaults to 1.

  • fps (int, optional) – Frames per second. Defaults to 30.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises
  • FileNotFoundError – check the input path.

  • FileNotFoundError – check the output path.

Returns

None.

mmhuman3d.utils.conver_verts_to_cam_coord(verts, pred_cams, bboxes_xy, focal_length=5000.0, bbox_scale_factor=1.25, bbox_format='xyxy')[source]

Convert vertices from the world coordinate to camera coordinate.

Parameters
  • verts ([np.ndarray]) – The vertices in the world coordinate. The shape is (frame,num_person,6890,3) or (frame,6890,3).

  • pred_cams ([np.ndarray]) – Camera parameters estimated by HMR or SPIN. The shape is (frame,num_person,3) or (frame,3).

  • bboxes_xy ([np.ndarray]) – (frame, num_person, 4|5) or (frame, 4|5)

  • focal_length (float, optional) – Defined the same as in your training. Defaults to 5000.0.

  • bbox_scale_factor (float) – scale factor for expanding the bbox.

  • bbox_format (Literal['xyxy', 'xywh']) – ‘xyxy’ means the left-up point and right-bottom point of the bbox. ‘xywh’ means the left-up point and the width and height of the bbox.

Returns

The vertices in the camera coordinate.

The shape is (frame,num_person,6890,3) or (frame,6890,3).

np.ndarray: The intrinsic parameters of the pred_cam.

The shape is (num_frame, 3, 3).

Return type

np.ndarray

mmhuman3d.utils.convert_bbox_to_intrinsic(bboxes: numpy.ndarray, img_width: int = 224, img_height: int = 224, bbox_scale_factor: float = 1.25, bbox_format: typing_extensions.Literal[xyxy, xywh] = 'xyxy')[source]

Convert bbox to intrinsic parameters.

Parameters
  • bboxes (np.ndarray) – (frame, num_person, 4) or (frame, 4)

  • img_width (int) – image width of training data.

  • img_height (int) – image height of training data.

  • bbox_scale_factor (float) – scale factor for expanding the bbox.

  • bbox_format (Literal['xyxy', 'xywh']) – ‘xyxy’ means the left-up point and right-bottom point of the bbox. ‘xywh’ means the left-up point and the width and height of the bbox.

Returns

(frame, num_person, 3, 3) or (frame, 3, 3)

Return type

np.ndarray
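
Example (an illustrative sketch of convert_bbox_to_intrinsic; the bbox values and image size are arbitrary)

>>> import numpy as np
>>> from mmhuman3d.utils import convert_bbox_to_intrinsic
>>> bboxes = np.array([[[10., 20., 110., 220.]]])   # (frame=1, num_person=1, 4), xyxy
>>> Ks = convert_bbox_to_intrinsic(bboxes, img_width=224, img_height=224)   # (1, 1, 3, 3)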

mmhuman3d.utils.convert_crop_cam_to_orig_img(cam: numpy.ndarray, bbox: numpy.ndarray, img_width: int, img_height: int, aspect_ratio: float = 1.0, bbox_scale_factor: float = 1.25, bbox_format: typing_extensions.Literal[xyxy, xywh, cs] = 'xyxy')[source]

This function is modified from VIBE (https://github.com/mkocabas/VIBE/blob/master/lib/utils/demo_utils.py#L242-L259). For the original license, please see docs/additional_licenses.md.

Parameters
  • cam (np.ndarray) – weak perspective camera in cropped img coordinates, shape (frame, 3) or (frame, num_person, 3).

  • bbox (np.ndarray) – bbox coordinates

  • img_width (int) – original image width

  • img_height (int) – original image height

  • aspect_ratio (float, optional) – Defaults to 1.0.

  • bbox_scale_factor (float, optional) – Defaults to 1.25.

  • bbox_format (Literal['xyxy', 'xywh', 'cs']) – Defaults to ‘xyxy’. ‘xyxy’ means the left-up point and right-bottom point of the bbox. ‘xywh’ means the left-up point and the width and height of the bbox. ‘cs’ means the center of the bbox (x,y) and the scale of the bbox w & h.

Returns

shape = (frame, 4) or (frame, num_person, 4)

Return type

orig_cam

mmhuman3d.utils.convert_kp2d_to_bbox(kp2d: numpy.ndarray, bbox_format: typing_extensions.Literal[xyxy, xywh] = 'xyxy')numpy.ndarray[source]

Convert kp2d to bbox.

Parameters
  • kp2d (np.ndarray) – shape should be (num_frame, num_points, 2/3) or (num_frame, num_person, num_points, 2/3).

  • bbox_format (Literal['xyxy', 'xywh'], optional) – Defaults to ‘xyxy’.

Returns

shape will be (num_frame, num_person, 4)

Return type

np.ndarray

mmhuman3d.utils.crop_video(input_path: str, output_path: str, box: Optional[Union[List[int], Tuple[int, int, int, int]]] = None, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, disable_log: bool = False)None[source]

Spatially or temporally crop a video or gif file.

Parameters
  • input_path (str) – input video or gif file path.

  • output_path (str) – output video or gif file path.

  • box (Iterable[int], optional) – [x, y] of the top-left corner of the crop region, followed by its width and height. Defaults to [0, 0, 100, 100].

  • resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional) – (height, width) of output. Defaults to None.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises
  • FileNotFoundError – check the input path.

  • FileNotFoundError – check the output path.

Returns

None

mmhuman3d.utils.ee_to_aa(euler_angle: Union[torch.Tensor, numpy.ndarray], convention: str = 'xyz')Union[torch.Tensor, numpy.ndarray][source]

Convert euler angles to axis angles.

Parameters
  • euler_angle (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 3). ndim of input is unlimited.

  • convention (str, optional) – Convention string of three letters from {“x”, “y”, and “z”}. Defaults to ‘xyz’.

Returns

shape would be (…, 3).

Return type

Union[torch.Tensor, numpy.ndarray]

mmhuman3d.utils.ee_to_quat(euler_angle: Union[torch.Tensor, numpy.ndarray], convention='xyz')Union[torch.Tensor, numpy.ndarray][source]

Convert euler angles to quaternions.

Parameters
  • euler_angle (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 3). ndim of input is unlimited.

  • convention (str, optional) – Convention string of three letters from {“x”, “y”, and “z”}. Defaults to ‘xyz’.

Returns

shape would be (…, 4).

Return type

Union[torch.Tensor, numpy.ndarray]

mmhuman3d.utils.ee_to_rot6d(euler_angle: Union[torch.Tensor, numpy.ndarray], convention='xyz')Union[torch.Tensor, numpy.ndarray][source]

Convert euler angles to rotation 6d representation.

Parameters
  • euler_angle (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 3). ndim of input is unlimited.

  • convention (str, optional) – Convention string of three letters from {“x”, “y”, and “z”}. Defaults to ‘xyz’.

Returns

shape would be (…, 6).

Return type

Union[torch.Tensor, numpy.ndarray]

[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035

mmhuman3d.utils.ee_to_rotmat(euler_angle: Union[torch.Tensor, numpy.ndarray], convention='xyz')Union[torch.Tensor, numpy.ndarray][source]

Convert euler angles to rotation matrices.

Parameters
  • euler_angle (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 3). ndim of input is unlimited.

  • convention (str, optional) – Convention string of three letters from {“x”, “y”, and “z”}. Defaults to ‘xyz’.

Returns

shape would be (…, 3, 3).

Return type

Union[torch.Tensor, numpy.ndarray]
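
Example (a round-trip sketch for the euler-angle helpers; any leading batch shape works):

>>> import torch
>>> from mmhuman3d.utils import ee_to_rotmat, rotmat_to_ee
>>> euler = torch.rand(8, 21, 3)                       # (..., 3) euler angles
>>> rotmat = ee_to_rotmat(euler, convention='xyz')     # (..., 3, 3)
>>> euler_back = rotmat_to_ee(rotmat, convention='xyz')
>>> euler_back.shape
torch.Size([8, 21, 3])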

mmhuman3d.utils.estimate_translation(S, joints_2d, focal_length=5000.0, img_size=224.0)[source]

Find the camera translation that brings the 3D joints S closest to the corresponding 2D joints joints_2d.

Input:

S: (B, 49, 3) 3D joint locations
joints: (B, 49, 3) 2D joint locations and confidence

Returns

(B, 3) camera translation vectors

mmhuman3d.utils.estimate_translation_np(S, joints_2d, joints_conf, focal_length=5000, img_size=224)[source]

Find the camera translation that brings the 3D joints S closest to the corresponding 2D joints joints_2d.

Input:

S: (25, 3) 3D joint locations
joints: (25, 3) 2D joint locations and confidence

Returns

(3,) camera translation vector

mmhuman3d.utils.get_default_hmr_intrinsic(num_frame=1, focal_length=1000, det_width=224, det_height=224)numpy.ndarray[source]

Get the default HMR intrinsic matrix; the values should match those used in training.

Parameters
  • num_frame (int, optional) – num of frames. Defaults to 1.

  • focal_length (int, optional) – focal length; should be the same value used in training. Defaults to 1000.

  • det_width (int, optional) – the size you used to detect. Defaults to 224.

  • det_height (int, optional) – the size you used to detect. Defaults to 224.

Returns

shape of (N, 3, 3)

Return type

np.ndarray
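
Example (a sketch using training-style default settings):

>>> from mmhuman3d.utils import get_default_hmr_intrinsic
>>> K = get_default_hmr_intrinsic(num_frame=100, focal_length=1000,
...                               det_width=224, det_height=224)
>>> K.shape
(100, 3, 3)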

mmhuman3d.utils.get_different_colors(number_of_colors, flag=0, alpha: float = 1.0, mode: str = 'bgr', int_dtype: bool = True)[source]

Get a numpy array of colors of shape (N, 3).

mmhuman3d.utils.gif_to_images(input_path: str, output_folder: str, fps: int = 30, img_format: str = '%06d.png', resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, disable_log: bool = False)None[source]

Convert a gif file to a folder of images.

Parameters
  • input_path (str) – input gif file path.

  • output_folder (str) – output folder to save the images.

  • fps (int, optional) – fps. Defaults to 30.

  • img_format (str, optional) – output image name format. Defaults to ‘%06d.png’.

  • resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional) – (height, width) of output. Defaults to None.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises
  • FileNotFoundError – check the input path.

  • FileNotFoundError – check the output path.

Returns

None

mmhuman3d.utils.gif_to_video(input_path: str, output_path: str, fps: int = 30, remove_raw_file: bool = False, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, disable_log: bool = False)None[source]

Convert a gif file to a video.

Parameters
  • input_path (str) – input gif file path.

  • output_path (str) – output video file path.

  • fps (int, optional) – fps. Defaults to 30.

  • remove_raw_file (bool, optional) – whether remove original input file. Defaults to False.

  • down_sample_scale (Union[int, float], optional) – down sample scale. Defaults to 1.

  • resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional) – (height, width) of output. Defaults to None.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises
  • FileNotFoundError – check the input path.

  • FileNotFoundError – check the output path.

Returns

None

mmhuman3d.utils.images_to_array(input_folder: str, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, img_format: str = '%06d.png', start: int = 0, end: Optional[int] = None, remove_raw_files: bool = False, disable_log: bool = False)numpy.ndarray[source]

Read a folder of images as an array of (f * h * w * 3).

Parameters
  • input_folder (str) – folder of input images.

  • resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional) – resolution (height, width) of output. Defaults to None.

  • img_format (str, optional) – format of images to be read. Defaults to ‘%06d.png’.

  • start (int, optional) –

    start frame index. Inclusive.

    If < 0, will be converted to frame_index range in [0, frame_num].

    Defaults to 0.

  • end (int, optional) – end frame index. Exclusive. Could be positive int or negative int or None. If None, all frames from start till the last frame are included. Defaults to None.

  • remove_raw_files (bool, optional) – whether remove raw images. Defaults to False.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises

FileNotFoundError – check the input path.

Returns

shape will be (f * h * w * 3).

Return type

np.ndarray

mmhuman3d.utils.images_to_gif(input_folder: str, output_path: str, remove_raw_file: bool = False, img_format: str = '%06d.png', fps: int = 15, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, start: int = 0, end: Optional[int] = None, disable_log: bool = False)None[source]

Convert a series of images to a gif, similar to images_to_video, but with parameters more suitable for gif output.

Parameters
  • input_folder (str) – input image folder.

  • output_path (str) – output gif file path.

  • remove_raw_file (bool, optional) – whether remove raw images. Defaults to False.

  • img_format (str, optional) – format to name the images. Defaults to ‘%06d.png’.

  • fps (int, optional) – output video fps. Defaults to 15.

  • resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional) – (height, width) of output. Defaults to None.

  • start (int, optional) – start frame index. Inclusive. If < 0, will be converted to frame_index range in [0, frame_num]. Defaults to 0.

  • end (int, optional) – end frame index. Exclusive. Could be positive int or negative int or None. If None, all frames from start till the last frame are included. Defaults to None.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises
  • FileNotFoundError – check the input path.

  • FileNotFoundError – check the output path.

Returns

None

mmhuman3d.utils.images_to_sorted_images(input_folder, output_folder, img_format='%06d')[source]

Copy and rename a folder of images into a new folder following the img_format.

Parameters
  • input_folder (str) – input folder.

  • output_folder (str) – output folder.

  • img_format (str, optional) – image format name, do not need extension. Defaults to ‘%06d’.

Returns

image format of the renamed images.

Return type

str
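
Example (a sketch; the folder names are hypothetical):

>>> from mmhuman3d.utils import images_to_sorted_images
>>> # copies the images in 'raw_frames/' into 'sorted_frames/' renamed as 000000.*, 000001.*, ...
>>> img_format = images_to_sorted_images('raw_frames/', 'sorted_frames/')
>>> # img_format is the '%06d'-style format string of the renamed images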

mmhuman3d.utils.images_to_video(input_folder: str, output_path: str, remove_raw_file: bool = False, img_format: str = '%06d.png', fps: Union[int, float] = 30, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, start: int = 0, end: Optional[int] = None, disable_log: bool = False)None[source]

Convert a folder of images to a video.

Parameters
  • input_folder (str) – input image folder

  • output_path (str) – output video file path

  • remove_raw_file (bool, optional) – whether remove raw images. Defaults to False.

  • img_format (str, optional) – format to name the images. Defaults to '%06d.png'.

  • fps (Union[int, float], optional) – output video fps. Defaults to 30.

  • resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional) – (height, width) of output. Defaults to None.

  • start (int, optional) – start frame index. Inclusive. If < 0, will be converted to frame_index range in [0, frame_num]. Defaults to 0.

  • end (int, optional) – end frame index. Exclusive. Could be positive int or negative int or None. If None, all frames from start till the last frame are included. Defaults to None.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises
  • FileNotFoundError – check the input path.

  • FileNotFoundError – check the output path.

Returns

None
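
Example (a sketch; the paths are hypothetical and ffmpeg must be available):

>>> from mmhuman3d.utils import images_to_video
>>> images_to_video(input_folder='sorted_frames/', output_path='demo.mp4',
...                 img_format='%06d.png', fps=30, resolution=(480, 640))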

mmhuman3d.utils.join_batch_meshes_as_scene(meshes: List[pytorch3d.structures.Meshes], include_textures: bool = True)pytorch3d.structures.Meshes[source]

Join meshes as a scene for each batch. Only for pytorch3d meshes. The Meshes must share the same batch size but may have arbitrary topology. They must all be on the same device. If include_textures is True, they must all be compatible, either all or none having textures, and all the Textures objects being the same type. If include_textures is False, textures are ignored. Otherwise, a ValueError is raised in join_meshes_as_batch and join_meshes_as_scene.

Parameters
  • meshes (List[Meshes]) – A list of Meshes with the same batches. Required.

  • include_textures – (bool) whether to try to join the textures.

Returns

New Meshes which joins the different Meshes within each batch as a single scene.
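
Example (a sketch assuming two pytorch3d Meshes of the same batch size are already built, e.g. one per person):

>>> from mmhuman3d.utils import join_batch_meshes_as_scene
>>> # meshes_a and meshes_b: pytorch3d Meshes, both with batch size N
>>> scene_meshes = join_batch_meshes_as_scene([meshes_a, meshes_b])
>>> # scene_meshes keeps batch size N; element i contains both meshes of batch i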

mmhuman3d.utils.mesh_to_pointcloud_vc(meshes: pytorch3d.structures.Meshes, include_textures: bool = True, alpha: float = 1.0)pytorch3d.structures.Pointclouds[source]

Convert pytorch3d Meshes to PointClouds.

Parameters
  • meshes (Meshes) – input meshes.

  • include_textures (bool, optional) – Whether to include colors. Requires the texture of the input meshes to be vertex color. Defaults to True.

  • alpha (float, optional) – transparency. Defaults to 1.0.

Returns

output pointclouds.

Return type

Pointclouds

mmhuman3d.utils.pad_for_libx264(image_array)[source]

Pad zeros if the width or height of image_array is not divisible by 2. Otherwise libx264 will raise:

“[libx264 @ 0x1b1d560] width not divisible by 2 “

Parameters

image_array (np.ndarray) – Image or images loaded by cv2.imread(). Possible shapes: 1. [height, width] 2. [height, width, channels] 3. [images, height, width] 4. [images, height, width, channels]

Returns

A image with both edges divisible by 2.

Return type

np.ndarray
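
Example (a sketch assuming padding rounds each odd edge up to the next even size):

>>> import numpy as np
>>> from mmhuman3d.utils import pad_for_libx264
>>> image = np.zeros((479, 639, 3), dtype=np.uint8)   # odd height and width
>>> padded = pad_for_libx264(image)
>>> padded.shape
(480, 640, 3)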

mmhuman3d.utils.perspective_projection(points, rotation, translation, focal_length, camera_center)[source]

This function computes the perspective projection of a set of points.

Input:

points (bs, N, 3): 3D points
rotation (bs, 3, 3): Camera rotation
translation (bs, 3): Camera translation
focal_length (bs,) or scalar: Focal length
camera_center (bs, 2): Camera center
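
Example (a sketch with random inputs; the projected output is assumed to have shape (bs, N, 2)):

>>> import torch
>>> from mmhuman3d.utils import perspective_projection
>>> points = torch.rand(2, 49, 3)                              # (bs, N, 3)
>>> rotation = torch.eye(3).expand(2, 3, 3)                    # (bs, 3, 3)
>>> translation = torch.tensor([[0., 0., 5.]]).expand(2, 3)    # keep points in front of the camera
>>> camera_center = torch.full((2, 2), 112.)                   # (bs, 2)
>>> kp2d = perspective_projection(points, rotation, translation,
...                               focal_length=5000., camera_center=camera_center)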

mmhuman3d.utils.prepare_frames(input_path=None)[source]

Prepare frames from input_path.

Parameters

input_path (str, optional) – Defaults to None.

Raises

ValueError – check the input path.

Returns

prepared frames

Return type

List[np.ndarray]

mmhuman3d.utils.prepare_output_path(output_path: str, allowed_suffix: List[str] = [], tag: str = 'output file', path_type: typing_extensions.Literal[file, dir, auto] = 'auto', overwrite: bool = True)None[source]

Check output folder or file.

Parameters
  • output_path (str) – could be folder or file.

  • allowed_suffix (List[str], optional) – Check the suffix of output_path. If a folder, should be [] or ['']. If it could be either a folder or a file, should be [suffixes…, '']. Defaults to [].

  • tag (str, optional) – The string tag to specify the output type. Defaults to ‘output file’.

  • path_type (Literal['file', 'dir', 'auto'], optional) – Choose 'file' for a file and 'dir' for a folder. Choose 'auto' if both are allowed. Defaults to 'auto'.

  • overwrite (bool, optional) – Whether overwrite the existing file or folder. Defaults to True.

Raises
  • FileNotFoundError – suffix does not match.

  • FileExistsError – file or folder already exists and overwrite is False.

Returns

None

mmhuman3d.utils.process_mmdet_results(mmdet_results, cat_id=1)[source]

Process mmdet results, and return a list of bboxes.

Parameters
  • mmdet_results (list|tuple) – mmdet results.

  • cat_id (int) – category id (default: 1 for human)

Returns

a list of detected bounding boxes

Return type

person_results (list)
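
Example (a sketch; mmdet_results is assumed to come from mmdet's inference_detector on one frame):

>>> from mmhuman3d.utils import process_mmdet_results
>>> # mmdet_results = inference_detector(det_model, frame)
>>> person_results = process_mmdet_results(mmdet_results, cat_id=1)
>>> # each item in person_results is a dict describing one detected person's bbox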

mmhuman3d.utils.process_mmtracking_results(mmtracking_results, max_track_id)[source]

Process mmtracking results.

Parameters

mmtracking_results (list) – results from mmtracking.

Returns

a list of tracked bounding boxes

Return type

list

mmhuman3d.utils.quat_to_aa(quaternions: Union[torch.Tensor, numpy.ndarray])Union[torch.Tensor, numpy.ndarray][source]

Convert quaternions to axis angles.

Parameters

quaternions (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 4). ndim of input is unlimited.

Returns

shape would be (…, 3).

Return type

Union[torch.Tensor, numpy.ndarray]

mmhuman3d.utils.quat_to_ee(quaternions: Union[torch.Tensor, numpy.ndarray], convention: str = 'xyz')Union[torch.Tensor, numpy.ndarray][source]

Convert quaternions to euler angles.

Parameters
  • quaternions (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 4). ndim of input is unlimited.

  • convention (str, optional) – Convention string of three letters from {“x”, “y”, and “z”}. Defaults to ‘xyz’.

Returns

shape would be (…, 3).

Return type

Union[torch.Tensor, numpy.ndarray]

mmhuman3d.utils.quat_to_rot6d(quaternions: Union[torch.Tensor, numpy.ndarray])Union[torch.Tensor, numpy.ndarray][source]

Convert quaternions to rotation 6d representations.

Parameters

quaternions (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 4). ndim of input is unlimited.

Returns

shape would be (…, 6).

Return type

Union[torch.Tensor, numpy.ndarray]

[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035

mmhuman3d.utils.quat_to_rotmat(quaternions: Union[torch.Tensor, numpy.ndarray])Union[torch.Tensor, numpy.ndarray][source]

Convert quaternions to rotation matrices.

Parameters

quaternions (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 4). ndim of input is unlimited.

Returns

shape would be (…, 3, 3).

Return type

Union[torch.Tensor, numpy.ndarray]

mmhuman3d.utils.quaternion_to_angle_axis(quaternion: torch.Tensor)torch.Tensor[source]

This function is borrowed from https://github.com/kornia/kornia Convert quaternion vector to angle axis of rotation. Adapted from ceres C++ library: ceres-solver/include/ceres/rotation.h

Parameters

quaternion (torch.Tensor) – tensor with quaternions.

Returns

tensor with angle axis of rotation.

Return type

torch.Tensor

Shape:
  • Input: \((*, 4)\) where * means, any number of dimensions

  • Output: \((*, 3)\)

Example

>>> quaternion = torch.rand(2, 4)  # Nx4
>>> angle_axis = tgm.quaternion_to_angle_axis(quaternion)  # Nx3
mmhuman3d.utils.rot6d_to_aa(rotation_6d: Union[torch.Tensor, numpy.ndarray])Union[torch.Tensor, numpy.ndarray][source]

Convert rotation 6d representations to axis angles.

Parameters

rotation_6d (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 6). ndim of input is unlimited.

Returns

shape would be (…, 3).

Return type

Union[torch.Tensor, numpy.ndarray]

[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035

mmhuman3d.utils.rot6d_to_ee(rotation_6d: Union[torch.Tensor, numpy.ndarray], convention: str = 'xyz')Union[torch.Tensor, numpy.ndarray][source]

Convert rotation 6d representations to euler angles.

Parameters

rotation_6d (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 6). ndim of input is unlimited.

Returns

shape would be (…, 3).

Return type

Union[torch.Tensor, numpy.ndarray]

[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035

mmhuman3d.utils.rot6d_to_quat(rotation_6d: Union[torch.Tensor, numpy.ndarray])Union[torch.Tensor, numpy.ndarray][source]

Convert rotation 6d representations to quaternions.

Parameters

rotation_6d (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 6). ndim of input is unlimited.

Returns

shape would be (…, 4).

Return type

Union[torch.Tensor, numpy.ndarray]

[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035

mmhuman3d.utils.rot6d_to_rotmat(rotation_6d: Union[torch.Tensor, numpy.ndarray])Union[torch.Tensor, numpy.ndarray][source]

Convert rotation 6d representations to rotation matrices.

Parameters

rotation_6d (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 6). ndim of input is unlimited.

Returns

shape would be (…, 3, 3).

Return type

Union[torch.Tensor, numpy.ndarray]

[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035

mmhuman3d.utils.rotation_matrix_to_angle_axis(rotation_matrix)[source]

This function is borrowed from https://github.com/kornia/kornia Convert 3x4 rotation matrix to Rodrigues vector.

Parameters

rotation_matrix (Tensor) – rotation matrix.

Returns

Rodrigues vector transformation.

Return type

Tensor

Shape:
  • Input: \((N, 3, 4)\)

  • Output: \((N, 3)\)

Example

>>> input = torch.rand(2, 3, 4)  # Nx3x4
>>> output = tgm.rotation_matrix_to_angle_axis(input)  # Nx3
mmhuman3d.utils.rotation_matrix_to_quaternion(rotation_matrix, eps=1e-06)[source]

This function is borrowed from https://github.com/kornia/kornia Convert 3x4 rotation matrix to 4d quaternion vector. This algorithm is based on the algorithm described in https://github.com/KieranWynn/pyquaternion/blob/master/pyquaternion/quaternion.py#L201

Parameters

rotation_matrix (Tensor) – the rotation matrix to convert.

Returns

the rotation in quaternion

Return type

Tensor

Shape:
  • Input: \((N, 3, 4)\)

  • Output: \((N, 4)\)

Example

>>> input = torch.rand(4, 3, 4)  # Nx3x4
>>> output = tgm.rotation_matrix_to_quaternion(input)  # Nx4
mmhuman3d.utils.rotmat_to_aa(matrix: Union[torch.Tensor, numpy.ndarray])Union[torch.Tensor, numpy.ndarray][source]

Convert rotation matrices to axis angles.

Parameters
  • matrix (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 3, 3). ndim of input is unlimited.

  • convention (str, optional) – Convention string of three letters from {“x”, “y”, and “z”}. Defaults to ‘xyz’.

Returns

shape would be (…, 3).

Return type

Union[torch.Tensor, numpy.ndarray]

mmhuman3d.utils.rotmat_to_ee(matrix: Union[torch.Tensor, numpy.ndarray], convention: str = 'xyz')Union[torch.Tensor, numpy.ndarray][source]

Convert rotation matrices to euler angles.

Parameters
  • matrix (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 3, 3). ndim of input is unlimited.

  • convention (str, optional) – Convention string of three letters from {“x”, “y”, and “z”}. Defaults to ‘xyz’.

Returns

shape would be (…, 3).

Return type

Union[torch.Tensor, numpy.ndarray]

mmhuman3d.utils.rotmat_to_quat(matrix: Union[torch.Tensor, numpy.ndarray])Union[torch.Tensor, numpy.ndarray][source]

Convert rotation matrices to quaternions.

Parameters

matrix (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 3, 3). ndim of input is unlimited.

Returns

shape would be (…, 4).

Return type

Union[torch.Tensor, numpy.ndarray]

mmhuman3d.utils.rotmat_to_rot6d(matrix: Union[torch.Tensor, numpy.ndarray])Union[torch.Tensor, numpy.ndarray][source]

Convert rotation matrices to rotation 6d representations.

Parameters

matrix (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 3, 3). ndim of input is unlimited.

Returns

shape would be (…, 6).

Return type

Union[torch.Tensor, numpy.ndarray]

[1] Zhou, Y., Barnes, C., Lu, J., Yang, J., & Li, H. On the Continuity of Rotation Representations in Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition, 2019. Retrieved from http://arxiv.org/abs/1812.07035

mmhuman3d.utils.save_meshes_as_objs(meshes: Optional[pytorch3d.structures.Meshes] = None, paths: List[str] = [])None[source]

Save meshes as .obj files. Mainly for uv texture meshes.

Parameters
  • meshes (Meshes, optional) – Defaults to None.

  • paths (List[str], optional) – Output .obj file list. Defaults to [].

mmhuman3d.utils.save_meshes_as_plys(meshes: Optional[pytorch3d.structures.Meshes] = None, verts: Optional[torch.Tensor] = None, faces: Optional[torch.Tensor] = None, verts_rgb: Optional[torch.Tensor] = None, paths: List[str] = [])None[source]

Save meshes as .ply files. Mainly for vertex color meshes.

Parameters
  • meshes (Meshes, optional) – higher priority than (verts & faces & verts_rgb). Defaults to None.

  • verts (torch.Tensor, optional) – lower priority than meshes. Defaults to None.

  • faces (torch.Tensor, optional) – lower priority than meshes. Defaults to None.

  • verts_rgb (torch.Tensor, optional) – lower priority than meshes. Defaults to None.

  • paths (List[str], optional) – Output .ply file list. Defaults to [].
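
Example (a sketch using raw tensors instead of a Meshes object; the vertex and face counts follow SMPL but are otherwise arbitrary):

>>> import torch
>>> from mmhuman3d.utils import save_meshes_as_plys
>>> verts = torch.rand(2, 6890, 3)                     # two frames of vertices
>>> faces = torch.randint(0, 6890, (2, 13776, 3))      # dummy faces
>>> verts_rgb = torch.rand(2, 6890, 3)                 # per-vertex colors
>>> save_meshes_as_plys(verts=verts, faces=faces, verts_rgb=verts_rgb,
...                     paths=['frame_000.ply', 'frame_001.ply'])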

mmhuman3d.utils.search_limbs(data_source: str, mask: Optional[Union[numpy.ndarray, tuple, list]] = None, keypoints_factory: dict = {'agora': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'jaw', 'left_eyeball', 'right_eyeball', 'left_index_1', 'left_index_2', 'left_index_3', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'right_index_1', 'right_index_2', 'right_index_3', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8'], 'coco': ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip_extra', 'right_hip_extra', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle'], 'coco_wholebody': ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'left_hand_root', 
'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'left_thumb', 'left_index_1', 'left_index_2', 'left_index_3', 'left_index', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_middle', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_ring', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_pinky', 'right_hand_root', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'right_thumb', 'right_index_1', 'right_index_2', 'right_index_3', 'right_index', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_middle', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_ring', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_pinky'], 'crowdpose': ['left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle', 'head', 'neck'], 'gta': ['gta_head_top', 'head', 'neck', 'gta_right_clavicle', 'right_shoulder', 'right_elbow', 'right_wrist', 'gta_left_clavicle', 'left_shoulder', 'left_elbow', 'left_wrist', 'spine_2', 'gta_spine1', 'spine_1', 'pelvis', 'gta_spine4', 'right_hip', 'right_knee', 'right_ankle', 'left_hip', 'left_knee', 'left_ankle', 'gta_SKEL_ROOT', 'gta_FB_R_Brow_Out_000', 'left_foot', 'gta_MH_R_Elbow', 'left_thumb_2', 'left_thumb_3', 'left_ring_2', 'left_ring_3', 'left_pinky_2', 'left_pinky_3', 'left_index_2', 'left_index_3', 'left_middle_2', 'left_middle_3', 'gta_RB_L_ArmRoll', 'gta_IK_R_Hand', 'gta_RB_R_ThighRoll', 'gta_FB_R_Lip_Corner_000', 'gta_SKEL_Pelvis', 'gta_IK_Head', 'gta_MH_R_Knee', 'gta_FB_LowerLipRoot_000', 'gta_FB_R_Lip_Top_000', 'gta_FB_R_CheekBone_000', 'gta_FB_UpperLipRoot_000', 'gta_FB_L_Lip_Top_000', 'gta_FB_LowerLip_000', 'right_foot', 'gta_FB_L_CheekBone_000', 'gta_MH_L_Elbow', 'gta_RB_L_ThighRoll', 'gta_PH_R_Foot', 'left_eye', 'gta_SKEL_L_Finger00', 'left_index_1', 'left_middle_1', 'left_ring_1', 'left_pinky_1', 'right_eye', 'gta_PH_R_Hand', 'gta_FB_L_Lip_Corner_000', 'gta_IK_R_Foot', 'gta_RB_Neck_1', 'gta_IK_L_Hand', 'gta_RB_R_ArmRoll', 'gta_FB_Brow_Centre_000', 'gta_FB_R_Lid_Upper_000', 'gta_RB_R_ForeArmRoll', 'gta_FB_L_Lid_Upper_000', 'gta_MH_L_Knee', 'gta_FB_Jaw_000', 'gta_FB_L_Lip_Bot_000', 'gta_FB_Tongue_000', 'gta_FB_R_Lip_Bot_000', 'gta_IK_Root', 'gta_PH_L_Foot', 'gta_FB_L_Brow_Out_000', 'gta_SKEL_R_Finger00', 'right_index_1', 'right_middle_1', 'right_ring_1', 'right_pinky_1', 'gta_PH_L_Hand', 'gta_RB_L_ForeArmRoll', 'gta_FB_UpperLip_000', 'right_thumb_2', 'right_thumb_3', 'right_ring_2', 'right_ring_3', 'right_pinky_2', 'right_pinky_3', 'right_index_2', 'right_index_3', 'right_middle_2', 'right_middle_3', 'gta_FACIAL_facialRoot', 'gta_IK_L_Foot', 'nose'], 'h36m': ['pelvis_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_hip_extra', 'right_knee', 'right_ankle', 'spine_extra', 'neck_extra', 'head_extra', 'headtop', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_shoulder', 'right_elbow', 'right_wrist'], 'h36m_mmpose': ['pelvis_extra', 'right_hip_extra', 'right_knee', 'right_ankle', 'left_hip_extra', 'left_knee', 'left_ankle', 'spine_extra', 'neck_extra', 'head_extra', 'headtop', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_shoulder', 'right_elbow', 'right_wrist'], 'human_data': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'jaw', 'left_eyeball', 'right_eyeball', 
'left_index_1', 'left_index_2', 'left_index_3', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'right_index_1', 'right_index_2', 'right_index_3', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17', 'right_hip_extra', 'left_hip_extra', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra', 'nose_openpose', 'neck_openpose', 'right_shoulder_openpose', 'right_elbow_openpose', 'right_wrist_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'pelvis_openpose', 'right_hip_openpose', 'right_knee_openpose', 'right_ankle_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_ankle_openpose', 'right_eye_openpose', 'left_eye_openpose', 'right_ear_openpose', 'left_ear_openpose', 'left_bigtoe_openpose', 'left_smalltoe_openpose', 'left_heel_openpose', 'right_bigtoe_openpose', 'right_smalltoe_openpose', 'right_heel_openpose', 'spine_4_3dhp', 'left_clavicle_3dhp', 'right_clavicle_3dhp', 'left_hand_3dhp', 'right_hand_3dhp', 'left_toe_3dhp', 'right_toe_3dhp', 'head_h36m', 'headtop_h36m', 'head_bottom_pt', 'left_hand', 'right_hand'], 'hybrik_29': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'jaw', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_thumb', 'right_thumb', 'head', 'left_middle', 'right_middle', 'left_bigtoe', 'right_bigtoe'], 'hybrik_hp3d': ['spine_3', 'spine_4_3dhp', 'spine_2', 'spine_extra', 'pelvis', 'neck', 'head_extra', 'headtop', 'left_clavicle_3dhp', 'left_shoulder', 'left_elbow', 'left_wrist', 'left_hand_3dhp', 'right_clavicle_3dhp', 'right_shoulder', 'right_elbow', 'right_wrist', 'right_hand_3dhp', 'left_hip', 'left_knee', 'left_ankle', 'left_foot', 'left_toe_3dhp', 'right_hip', 'right_knee', 'right_ankle', 'right_foot', 'right_toe_3dhp'], 'instavariety': 
['right_heel_openpose', 'right_knee_openpose', 'right_hip_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_heel_openpose', 'right_wrist_openpose', 'right_elbow_openpose', 'right_shoulder_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'neck_openpose', 'headtop', 'nose_openpose', 'left_eye_openpose', 'right_eye_openpose', 'left_ear_openpose', 'right_ear_openpose', 'left_bigtoe_openpose', 'right_bigtoe_openpose', 'left_smalltoe_openpose', 'right_smalltoe_openpose', 'left_ankle_openpose', 'right_ankle_openpose'], 'lsp': ['right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist', 'neck_extra', 'headtop'], 'mpi_inf_3dhp': ['spine_3', 'spine_4_3dhp', 'spine_2', 'spine_extra', 'pelvis_extra', 'neck_extra', 'head_extra', 'headtop', 'left_clavicle_3dhp', 'left_shoulder', 'left_elbow', 'left_wrist', 'left_hand_3dhp', 'right_clavicle_3dhp', 'right_shoulder', 'right_elbow', 'right_wrist', 'right_hand_3dhp', 'left_hip_extra', 'left_knee', 'left_ankle', 'left_foot', 'left_toe_3dhp', 'right_hip_extra', 'right_knee', 'right_ankle', 'right_foot', 'right_toe_3dhp'], 'mpi_inf_3dhp_test': ['headtop', 'neck_extra', 'right_shoulder', 'right_elbow', 'right_wrist', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_hip_extra', 'right_knee', 'right_ankle', 'left_hip_extra', 'left_knee', 'left_ankle', 'pelvis_extra', 'spine_extra', 'head_extra'], 'mpii': ['right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'pelvis_extra', 'thorax_extra', 'neck_extra', 'headtop', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist'], 'openpose_135': ['nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle', 'neck', 'head', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'left_thumb', 'left_index_1', 'left_index_2', 'left_index_3', 'left_index', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_middle', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_ring', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_pinky', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'right_thumb', 'right_index_1', 'right_index_2', 'right_index_3', 'right_index', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_middle', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_ring', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_pinky', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 
'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'right_eyeball', 'left_eyeball'], 'openpose_25': ['nose_openpose', 'neck_openpose', 'right_shoulder_openpose', 'right_elbow_openpose', 'right_wrist_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'pelvis_openpose', 'right_hip_openpose', 'right_knee_openpose', 'right_ankle_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_ankle_openpose', 'right_eye_openpose', 'left_eye_openpose', 'right_ear_openpose', 'left_ear_openpose', 'left_bigtoe_openpose', 'left_smalltoe_openpose', 'left_heel_openpose', 'right_bigtoe_openpose', 'right_smalltoe_openpose', 'right_heel_openpose'], 'penn_action': ['head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle'], 'posetrack': ['nose', 'head_bottom_pt', 'headtop', 'left_ear', 'right_ear', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hip', 'right_hip', 'left_knee', 'right_knee', 'left_ankle', 'right_ankle'], 'pw3d': ['nose', 'neck_extra', 'right_shoulder', 'right_elbow', 'right_wrist', 'left_shoulder', 'left_elbow', 'left_wrist', 'right_hip_extra', 'right_knee', 'right_ankle', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_eye', 'left_eye', 'right_ear', 'left_ear'], 'smpl': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hand', 'right_hand'], 'smpl_24': ['right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_wrist', 'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra', 'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear'], 'smpl_45': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hand', 'right_hand', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky'], 'smpl_49': ['nose_openpose', 'neck_openpose', 'right_shoulder_openpose', 'right_elbow_openpose', 'right_wrist_openpose', 'left_shoulder_openpose', 'left_elbow_openpose', 'left_wrist_openpose', 'pelvis_openpose', 'right_hip_openpose', 'right_knee_openpose', 'right_ankle_openpose', 'left_hip_openpose', 'left_knee_openpose', 'left_ankle_openpose', 'right_eye_openpose', 'left_eye_openpose', 'right_ear_openpose', 'left_ear_openpose', 'left_bigtoe_openpose', 'left_smalltoe_openpose', 'left_heel_openpose', 'right_bigtoe_openpose', 'right_smalltoe_openpose', 'right_heel_openpose', 'right_ankle', 'right_knee', 'right_hip_extra', 'left_hip_extra', 'left_knee', 'left_ankle', 'right_wrist', 
'right_elbow', 'right_shoulder', 'left_shoulder', 'left_elbow', 'left_wrist', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra', 'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear'], 'smpl_54': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'left_hand', 'right_hand', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_hip_extra', 'left_hip_extra', 'neck_extra', 'headtop', 'pelvis_extra', 'thorax_extra', 'spine_extra', 'jaw_extra', 'head_extra'], 'smplx': ['pelvis', 'left_hip', 'right_hip', 'spine_1', 'left_knee', 'right_knee', 'spine_2', 'left_ankle', 'right_ankle', 'spine_3', 'left_foot', 'right_foot', 'neck', 'left_collar', 'right_collar', 'head', 'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow', 'left_wrist', 'right_wrist', 'jaw', 'left_eyeball', 'right_eyeball', 'left_index_1', 'left_index_2', 'left_index_3', 'left_middle_1', 'left_middle_2', 'left_middle_3', 'left_pinky_1', 'left_pinky_2', 'left_pinky_3', 'left_ring_1', 'left_ring_2', 'left_ring_3', 'left_thumb_1', 'left_thumb_2', 'left_thumb_3', 'right_index_1', 'right_index_2', 'right_index_3', 'right_middle_1', 'right_middle_2', 'right_middle_3', 'right_pinky_1', 'right_pinky_2', 'right_pinky_3', 'right_ring_1', 'right_ring_2', 'right_ring_3', 'right_thumb_1', 'right_thumb_2', 'right_thumb_3', 'nose', 'right_eye', 'left_eye', 'right_ear', 'left_ear', 'left_bigtoe', 'left_smalltoe', 'left_heel', 'right_bigtoe', 'right_smalltoe', 'right_heel', 'left_thumb', 'left_index', 'left_middle', 'left_ring', 'left_pinky', 'right_thumb', 'right_index', 'right_middle', 'right_ring', 'right_pinky', 'right_eyebrow_1', 'right_eyebrow_2', 'right_eyebrow_3', 'right_eyebrow_4', 'right_eyebrow_5', 'left_eyebrow_5', 'left_eyebrow_4', 'left_eyebrow_3', 'left_eyebrow_2', 'left_eyebrow_1', 'nosebridge_1', 'nosebridge_2', 'nosebridge_3', 'nosebridge_4', 'nose_1', 'nose_2', 'nose_3', 'nose_4', 'nose_5', 'right_eye_1', 'right_eye_2', 'right_eye_3', 'right_eye_4', 'right_eye_5', 'right_eye_6', 'left_eye_4', 'left_eye_3', 'left_eye_2', 'left_eye_1', 'left_eye_6', 'left_eye_5', 'mouth_1', 'mouth_2', 'mouth_3', 'mouth_4', 'mouth_5', 'mouth_6', 'mouth_7', 'mouth_8', 'mouth_9', 'mouth_10', 'mouth_11', 'mouth_12', 'lip_1', 'lip_2', 'lip_3', 'lip_4', 'lip_5', 'lip_6', 'lip_7', 'lip_8', 'face_contour_1', 'face_contour_2', 'face_contour_3', 'face_contour_4', 'face_contour_5', 'face_contour_6', 'face_contour_7', 'face_contour_8', 'face_contour_9', 'face_contour_10', 'face_contour_11', 'face_contour_12', 'face_contour_13', 'face_contour_14', 'face_contour_15', 'face_contour_16', 'face_contour_17']})Tuple[dict, dict][source]

Search the corresponding limbs following the basis human_data limbs. The mask could mask out the incorrect keypoints.

Parameters
  • data_source (str) – data source type.

  • mask (Optional[Union[np.ndarray, tuple, list]], optional) – refer to keypoints_mapping. Defaults to None.

  • keypoints_factory (dict, optional) – Dict of all the conventions. Defaults to KEYPOINTS_FACTORY.

Returns

(limbs_target, limbs_palette).

Return type

Tuple[dict, dict]

mmhuman3d.utils.sja_to_aa(sja: Union[torch.Tensor, numpy.ndarray], R_t: Union[torch.Tensor, numpy.ndarray] = tensor([[[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[0.0, 0.0, - 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[1.0, 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[0.0, 0.0, - 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[0.0, 0.0, - 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[0.0, 0.0, - 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]]]), R_t_inv: Union[torch.Tensor, numpy.ndarray] = tensor([[[1.0, - 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, - 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, 0.0, - 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, - 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, - 0.0, 0.0], [0.0, 0.0, - 1.0], [0.0, 1.0, 0.0]], [[1.0, 0.0, - 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, 0.0, - 0.0], [0.0, 1.0, - 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, - 0.0], [0.0, 1.0, - 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, - 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[1.0, 0.0, - 0.0], [0.0, 1.0, - 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, - 0.0], [0.0, 1.0, - 0.0], [0.0, 0.0, 1.0]], [[1.0, 0.0, - 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[0.0, - 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[- 0.0, 0.0, - 1.0], [- 0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[1.0, 0.0, - 0.0], [0.0, 0.0, 1.0], [0.0, - 1.0, 0.0]], [[0.0, - 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[- 0.0, 0.0, - 1.0], [- 0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[0.0, - 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[- 0.0, 0.0, - 1.0], [- 0.0, 1.0, 0.0], [1.0, 0.0, 0.0]], [[0.0, - 0.0, 1.0], [0.0, 1.0, 0.0], [- 1.0, 0.0, 0.0]], [[- 0.0, 0.0, - 1.0], [- 0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]]))Union[torch.Tensor, numpy.ndarray][source]

Convert standard joint angles to axis angles.

Parameters
  • sja (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 21, 3). ndim of input is unlimited.

  • R_t (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 21, 3, 3). Transformation matrices from original axis-angle coordinate system to standard joint angle coordinate system

  • R_t_inv (Union[torch.Tensor, numpy.ndarray]) – input shape should be (…, 21, 3, 3). Transformation matrices from standard joint angle coordinate system to original axis-angle coordinate system

Returns

shape would be (…, 3).

Return type

Union[torch.Tensor, numpy.ndarray]

mmhuman3d.utils.slice_video(input_path: str, output_path: str, start: int = 0, end: Optional[int] = None, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, disable_log: bool = False)None[source]

Temporally crop a video/gif into another video/gif.

Parameters
  • input_path (str) – input video or gif file path.

  • output_path (str) – output video or gif file path.

  • start (int, optional) – start frame index. Defaults to 0.

  • end (int, optional) – end frame index. Exclusive. Could be positive int or negative int or None. If None, all frames from start till the last frame are included. Defaults to None.

  • resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional) – (height, width) of output. Defaults to None.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises
  • FileNotFoundError – check the input path.

  • FileNotFoundError – check the output path.

Returns

NoReturn

mmhuman3d.utils.smooth_process(x, smooth_type='savgol')[source]

Smooth the array with the specified smoothing type.

Parameters
  • x (np.ndarray) – Shape should be (frame,num_person,K,C) or (frame,K,C).

  • smooth_type (str, optional) – Smooth type, chosen from ['oneeuro', 'gaus1d', 'savgol']. Defaults to 'savgol'.

Raises

ValueError – check the input smoothing type.

Returns

Smoothed data. The shape should be

(frame,num_person,K,C) or (frame,K,C).

Return type

np.ndarray
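
Example (a sketch with random keypoints; the output keeps the input shape):

>>> import numpy as np
>>> from mmhuman3d.utils import smooth_process
>>> kp3d = np.random.rand(100, 17, 3)            # (frame, K, C)
>>> kp3d_smooth = smooth_process(kp3d, smooth_type='savgol')
>>> kp3d_smooth.shape
(100, 17, 3)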

mmhuman3d.utils.spatial_concat_video(input_path_list: List[str], output_path: str, array: List[int] = [1, 1], direction: typing_extensions.Literal[h, w] = 'h', resolution: Union[Tuple[int, int], List[int], List[float], Tuple[float, float]] = (512, 512), remove_raw_files: bool = False, padding: int = 0, disable_log: bool = False)None[source]

Spatially concatenate several videos into a grid (array) video.

Parameters
  • input_path_list (list) – input video or gif file list.

  • output_path (str) – output video or gif file path.

  • array (List[int], optional) – number of rows and columns of the video array. Defaults to [1, 1].

  • direction (str, optional) – choose from 'h' and 'w', representing horizontal and vertical concatenation respectively. Defaults to 'h'.

  • resolution (Union[Tuple[int, int], List[int], List[float], Tuple[float, float]], optional) – (height, width) of output. Defaults to (512, 512).

  • remove_raw_files (bool, optional) – whether remove raw images. Defaults to False.

  • padding (int, optional) – width of pixels between videos. Defaults to 0.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises
  • FileNotFoundError – check the input path.

  • FileNotFoundError – check the output path.

Returns

None

mmhuman3d.utils.temporal_concat_video(input_path_list: List[str], output_path: str, resolution: Union[Tuple[int, int], Tuple[float, float]] = (512, 512), remove_raw_files: bool = False, disable_log: bool = False)None[source]

Concatenate videos or gifs (in any combination) into a temporal sequence, and save as a new video or gif file.

Parameters
  • input_path_list (List[str]) – list of input video paths.

  • output_path (str) – output video file path.

  • resolution (Union[Tuple[int, int], Tuple[float, float]], optional) – (height, width) of output. Defaults to (512, 512).

  • remove_raw_files (bool, optional) – whether remove the input videos. Defaults to False.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises
  • FileNotFoundError – check the input path.

  • FileNotFoundError – check the output path.

Returns

None.

mmhuman3d.utils.video_to_array(input_path: str, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, start: int = 0, end: Optional[int] = None, disable_log: bool = False)numpy.ndarray[source]

Read a video/gif as an array of (f * h * w * 3).

Parameters
  • input_path (str) – input path.

  • resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional) – resolution (height, width) of output. Defaults to None.

  • start (int, optional) – start frame index. Inclusive. If < 0, will be converted to frame_index range in [0, frame_num]. Defaults to 0.

  • end (int, optional) – end frame index. Exclusive. Could be positive int or negative int or None. If None, all frames from start till the last frame are included. Defaults to None.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises

FileNotFoundError – check the input path.

Returns

shape will be (f * h * w * 3).

Return type

np.ndarray
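
Example (a sketch; the path is hypothetical and ffmpeg must be available):

>>> from mmhuman3d.utils import video_to_array
>>> frames = video_to_array('demo.mp4', resolution=(480, 640), start=0, end=100)
>>> # frames is an np.ndarray of shape (f, h, w, 3), here at most 100 frames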

mmhuman3d.utils.video_to_gif(input_path: str, output_path: str, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, fps: Union[float, int] = 15, disable_log: bool = False)None[source]

Convert a video to a gif file.

Parameters
  • input_path (str) – video file path.

  • output_path (str) – gif file path.

  • resolution (Optional[Union[Tuple[int, int], Tuple[float, float]]], optional) – (height, width) of the output video. Defaults to None.

  • fps (Union[float, int], optional) – frames per second. Defaults to 15.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises
  • FileNotFoundError – check the input path.

  • FileNotFoundError – check the output path.

Returns

None.

mmhuman3d.utils.video_to_images(input_path: str, output_folder: str, resolution: Optional[Union[Tuple[int, int], Tuple[float, float]]] = None, img_format: str = '%06d.png', start: int = 0, end: Optional[int] = None, disable_log: bool = False)None[source]

Convert a video to a folder of images.

Parameters
  • input_path (str) – video file path

  • output_folder (str) – output folder to store the images

  • resolution (Optional[Tuple[int, int]], optional) – (height, width) of output. Defaults to None.

  • img_format (str, optional) – format of images to be read. Defaults to ‘%06d.png’.

  • start (int, optional) –

    start frame index. Inclusive.

    If < 0, will be converted to frame_index range in [0, frame_num].

    Defaults to 0.

  • end (int, optional) – end frame index. Exclusive. Could be positive int or negative int or None. If None, all frames from start till the last frame are included. Defaults to None.

  • disable_log (bool, optional) – whether to suppress the ffmpeg command info. Defaults to False.

Raises
  • FileNotFoundError – check the input path

  • FileNotFoundError – check the output path

Returns

None

mmhuman3d.utils.xywh2xyxy(bbox_xywh)[source]

Transform the bbox format from xywh to x1y1x2y2.

Parameters
  • bbox_xywh (np.ndarray) – Bounding boxes (with scores), shaped (n, 4) or (n, 5). (left, top, width, height, [score])

Returns

Bounding boxes (with scores),

shaped (n, 4) or (n, 5). (left, top, right, bottom, [score])

Return type

np.ndarray

mmhuman3d.utils.xyxy2xywh(bbox_xyxy)[source]

Transform the bbox format from x1y1x2y2 to xywh.

Parameters

bbox_xyxy (np.ndarray) – Bounding boxes (with scores), shaped (n, 4) or (n, 5). (left, top, right, bottom, [score])

Returns

Bounding boxes (with scores),

shaped (n, 4) or (n, 5). (left, top, width, height, [score])

Return type

np.ndarray
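
Example (a round-trip sketch for the two bbox helpers; whether the width/height include the end pixel depends on the implementation's convention, so exact values are not asserted):

>>> import numpy as np
>>> from mmhuman3d.utils import xywh2xyxy, xyxy2xywh
>>> bbox_xywh = np.array([[50., 60., 100., 140., 0.99]])   # (n, 5): left, top, w, h, score
>>> bbox_xyxy = xywh2xyxy(bbox_xywh)                       # (n, 5): left, top, right, bottom, score
>>> bbox_xywh_back = xyxy2xywh(bbox_xyxy)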
