Duplicate Labels#

Index objects are not required to be unique; you can have duplicate row or column labels. This may be a bit confusing at first. If you're familiar with SQL, you know that row labels are similar to a primary key on a table, and you would never want duplicates in a SQL table. But one of pandas' roles is to clean messy, real-world data before it goes to some downstream system. And real-world data has duplicates, even in fields that ought to be unique.

This section describes how duplicate labels change the behavior of certain operations, and how to prevent duplicates from arising during operations, or to detect them if they do.

In [1]: import pandas as pd

In [2]: import numpy as np

Consequences of Duplicate Labels#

Some pandas methods (Series.reindex() for example) just don't work with duplicates present. The output can't be determined, and so pandas raises.

In [3]: s1 = pd.Series([0, 1, 2], index=["a", "b", "b"])

In [4]: s1.reindex(["a", "b", "c"])
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[4], line 1
----> 1 s1.reindex(["a", "b", "c"])

File ~/work/pandas/pandas/pandas/core/series.py:5153, in Series.reindex(self, index, axis, method, copy, level, fill_value, limit, tolerance)
   5136 @doc(
   5137     NDFrame.reindex,  # type: ignore[has-type]
   5138     klass=_shared_doc_kwargs["klass"],
   (...)
   5151     tolerance=None,
   5152 ) -> Series:
-> 5153     return super().reindex(
   5154         index=index,
   5155         method=method,
   5156         copy=copy,
   5157         level=level,
   5158         fill_value=fill_value,
   5159         limit=limit,
   5160         tolerance=tolerance,
   5161     )

File ~/work/pandas/pandas/pandas/core/generic.py:5610, in NDFrame.reindex(self, labels, index, columns, axis, method, copy, level, fill_value, limit, tolerance)
   5607     return self._reindex_multi(axes, copy, fill_value)
   5609 # perform the reindex on the axes
-> 5610 return self._reindex_axes(
   5611     axes, level, limit, tolerance, method, fill_value, copy
   5612 ).__finalize__(self, method="reindex")

File ~/work/pandas/pandas/pandas/core/generic.py:5633, in NDFrame._reindex_axes(self, axes, level, limit, tolerance, method, fill_value, copy)
   5630     continue
   5632 ax = self._get_axis(a)
-> 5633 new_index, indexer = ax.reindex(
   5634     labels, level=level, limit=limit, tolerance=tolerance, method=method
   5635 )
   5637 axis = self._get_axis_number(a)
   5638 obj = obj._reindex_with_indexers(
   5639     {axis: [new_index, indexer]},
   5640     fill_value=fill_value,
   5641     copy=copy,
   5642     allow_dups=False,
   5643 )

File ~/work/pandas/pandas/pandas/core/indexes/base.py:4429, in Index.reindex(self, target, method, level, limit, tolerance)
   4426     raise ValueError("cannot handle a non-unique multi-index!")
   4427 elif not self.is_unique:
   4428     # GH#42568
-> 4429     raise ValueError("cannot reindex on an axis with duplicate labels")
   4430 else:
   4431     indexer, _ = self.get_indexer_non_unique(target)

ValueError: cannot reindex on an axis with duplicate labels

Other methods, like indexing, can give very surprising results. Typically indexing with a scalar will reduce dimensionality: slicing a DataFrame with a scalar will return a Series, and slicing a Series with a scalar will return a scalar. But with duplicates, this isn't the case.

In [5]: df1 = pd.DataFrame([[0, 1, 2], [3, 4, 5]], columns=["A", "A", "B"])

In [6]: df1
Out[6]: 
   A  A  B
0  0  1  2
1  3  4  5

We have duplicates in the columns. If we slice 'B', we get back a Series

In [7]: df1["B"]  # a series
Out[7]: 
0    2
1    5
Name: B, dtype: int64

But slicing 'A' returns a DataFrame

In [8]: df1["A"]  # a DataFrame
Out[8]: 
   A  A
0  0  1
1  3  4

This applies to row labels as well

In [9]: df2 = pd.DataFrame({"A": [0, 1, 2]}, index=["a", "a", "b"])

In [10]: df2
Out[10]: 
   A
a  0
a  1
b  2

In [11]: df2.loc["b", "A"]  # a scalar
Out[11]: 2

In [12]: df2.loc["a", "A"]  # a Series
Out[12]: 
a    0
a    1
Name: A, dtype: int64

Duplicate Label Detection#

You can check whether an Index (storing the row or column labels) is unique with Index.is_unique:

In [13]: df2
Out[13]: 
   A
a  0
a  1
b  2

In [14]: df2.index.is_unique
Out[14]: False

In [15]: df2.columns.is_unique
Out[15]: True

Note

Checking whether an index is unique is somewhat expensive for large datasets. pandas does cache this result, so re-checking the same index is very fast.

Index.duplicated() will return a boolean ndarray indicating whether a label is repeated.

In [16]: df2.index.duplicated()
Out[16]: array([False,  True, False])

This can be used as a boolean filter to drop duplicate rows.

In [17]: df2.loc[~df2.index.duplicated(), :]
Out[17]: 
   A
a  0
b  2
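By default, Index.duplicated() marks every occurrence after the first one as a duplicate. If you would rather keep the last occurrence, or flag all occurrences of a repeated label, pass keep="last" or keep=False. A minimal sketch using the df2 defined above (whose index is ["a", "a", "b"]); the outputs shown are the expected boolean arrays and filtered frame for that index:

>>> df2.index.duplicated(keep="last")    # mark all but the last occurrence
array([ True, False, False])
>>> df2.index.duplicated(keep=False)     # mark every occurrence of a duplicate
array([ True,  True, False])
>>> df2.loc[~df2.index.duplicated(keep="last"), :]  # keeps the later row labelled 'a'
   A
a  1
b  2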
If you need additional logic to handle duplicate labels, rather than just dropping the repeats, using groupby() on the index is a common trick. For example, we'll resolve duplicates by taking the average of all rows with the same label.

In [18]: df2.groupby(level=0).mean()
Out[18]: 
     A
a  0.5
b  2.0

Disallowing Duplicate Labels#

New in version 1.2.0.

As noted above, handling duplicates is an important feature when reading in raw data. That said, you may want to avoid introducing duplicates as part of a data processing pipeline (from methods like pandas.concat(), rename(), etc.). Both Series and DataFrame disallow duplicate labels by calling .set_flags(allows_duplicate_labels=False). (The default is to allow them.) If there are duplicate labels, an exception will be raised.

In [19]: pd.Series([0, 1, 2], index=["a", "b", "b"]).set_flags(allows_duplicate_labels=False)
---------------------------------------------------------------------------
DuplicateLabelError                       Traceback (most recent call last)
Cell In[19], line 1
----> 1 pd.Series([0, 1, 2], index=["a", "b", "b"]).set_flags(allows_duplicate_labels=False)

File ~/work/pandas/pandas/pandas/core/generic.py:508, in NDFrame.set_flags(self, copy, allows_duplicate_labels)
    506 df = self.copy(deep=copy and not using_copy_on_write())
    507 if allows_duplicate_labels is not None:
--> 508     df.flags["allows_duplicate_labels"] = allows_duplicate_labels
    509 return df

File ~/work/pandas/pandas/pandas/core/flags.py:109, in Flags.__setitem__(self, key, value)
    107 if key not in self._keys:
    108     raise ValueError(f"Unknown flag {key}. Must be one of {self._keys}")
--> 109 setattr(self, key, value)

File ~/work/pandas/pandas/pandas/core/flags.py:96, in Flags.allows_duplicate_labels(self, value)
     94 if not value:
     95     for ax in obj.axes:
---> 96         ax._maybe_check_unique()
     98 self._allows_duplicate_labels = value

File ~/work/pandas/pandas/pandas/core/indexes/base.py:715, in Index._maybe_check_unique(self)
    712     duplicates = self._format_duplicate_message()
    713     msg += f"\n{duplicates}"
--> 715 raise DuplicateLabelError(msg)

DuplicateLabelError: Index has duplicates.
      positions
label
b        [1, 2]

This applies to both row and column labels for a DataFrame

In [20]: pd.DataFrame([[0, 1, 2], [3, 4, 5]], columns=["A", "B", "C"],).set_flags(
   ....:     allows_duplicate_labels=False
   ....: )
   ....: 
Out[20]: 
   A  B  C
0  0  1  2
1  3  4  5

This attribute can be checked or set with allows_duplicate_labels, which indicates whether that object can have duplicate labels.

In [21]: df = pd.DataFrame({"A": [0, 1, 2, 3]}, index=["x", "y", "X", "Y"]).set_flags(
   ....:     allows_duplicate_labels=False
   ....: )
   ....: 

In [22]: df
Out[22]: 
   A
x  0
y  1
X  2
Y  3

In [23]: df.flags.allows_duplicate_labels
Out[23]: False

DataFrame.set_flags() can be used to return a new DataFrame with attributes like allows_duplicate_labels set to some value

In [24]: df2 = df.set_flags(allows_duplicate_labels=True)

In [25]: df2.flags.allows_duplicate_labels
Out[25]: True

The new DataFrame returned is a view on the same data as the old DataFrame. Or the property can just be set directly on the same object

In [26]: df2.flags.allows_duplicate_labels = False

In [27]: df2.flags.allows_duplicate_labels
Out[27]: False

When processing raw, messy data you might initially read in the messy data (which potentially has duplicate labels), deduplicate, and then disallow duplicates going forward, to ensure that your data processing pipeline doesn't introduce duplicates.

>>> raw = pd.read_csv("...")
>>> deduplicated = raw.groupby(level=0).first()  # remove duplicates
>>> deduplicated.flags.allows_duplicate_labels = False  # disallow going forward

Setting allows_duplicate_labels=False on a Series or DataFrame with duplicate labels, or performing an operation that introduces duplicate labels on a Series or DataFrame that disallows duplicates, will raise an errors.DuplicateLabelError.

In [28]: df.rename(str.upper)
---------------------------------------------------------------------------
DuplicateLabelError                       Traceback (most recent call last)
Cell In[28], line 1
----> 1 df.rename(str.upper)

File ~/work/pandas/pandas/pandas/core/frame.py:5767, in DataFrame.rename(self, mapper, index, columns, axis, copy, inplace, level, errors)
   5636 def rename(
   5637     self,
   5638     mapper: Renamer | None = None,
   (...)
   5646     errors: IgnoreRaise = "ignore",
   5647 ) -> DataFrame | None:
   5648     """
   5649     Rename columns or index labels.
   5650 
   (...)
   5765     4  3  6
   5766     """
-> 5767     return super()._rename(
   5768         mapper=mapper,
   5769         index=index,
   5770         columns=columns,
   5771         axis=axis,
   5772         copy=copy,
   5773         inplace=inplace,
   5774         level=level,
   5775         errors=errors,
   5776     )

File ~/work/pandas/pandas/pandas/core/generic.py:1140, in NDFrame._rename(self, mapper, index, columns, axis, copy, inplace, level, errors)
   1138     return None
   1139 else:
-> 1140     return result.__finalize__(self, method="rename")

File ~/work/pandas/pandas/pandas/core/generic.py:6262, in NDFrame.__finalize__(self, other, method, **kwargs)
   6255 if other.attrs:
   6256     # We want attrs propagation to have minimal performance
   6257     # impact if attrs are not used; i.e. attrs is an empty dict.
   6258     # One could make the deepcopy unconditionally, but a deepcopy
   6259     # of an empty dict is 50x more expensive than the empty check.
   6260     self.attrs = deepcopy(other.attrs)
-> 6262 self.flags.allows_duplicate_labels = other.flags.allows_duplicate_labels
   6263 # For subclasses using _metadata.
   6264 for name in set(self._metadata) & set(other._metadata):

File ~/work/pandas/pandas/pandas/core/flags.py:96, in Flags.allows_duplicate_labels(self, value)
     94 if not value:
     95     for ax in obj.axes:
---> 96         ax._maybe_check_unique()
     98 self._allows_duplicate_labels = value

File ~/work/pandas/pandas/pandas/core/indexes/base.py:715, in Index._maybe_check_unique(self)
    712     duplicates = self._format_duplicate_message()
    713     msg += f"\n{duplicates}"
--> 715 raise DuplicateLabelError(msg)

DuplicateLabelError: Index has duplicates.
      positions
label
X        [0, 2]
Y        [1, 3]

The error message contains the labels that are duplicated, and the numeric positions of all the duplicates (including the "original") in the Series or DataFrame.

Duplicate Label Propagation#

In general, disallowing duplicates is "sticky". It's preserved through operations.

In [29]: s1 = pd.Series(0, index=["a", "b"]).set_flags(allows_duplicate_labels=False)

In [30]: s1
Out[30]: 
a    0
b    0
dtype: int64

In [31]: s1.head().rename({"a": "b"})
---------------------------------------------------------------------------
DuplicateLabelError                       Traceback (most recent call last)
Cell In[31], line 1
----> 1 s1.head().rename({"a": "b"})

File ~/work/pandas/pandas/pandas/core/series.py:5090, in Series.rename(self, index, axis, copy, inplace, level, errors)
   5083 axis = self._get_axis_number(axis)
   5085 if callable(index) or is_dict_like(index):
   5086     # error: Argument 1 to "_rename" of "NDFrame" has incompatible
   5087     # type "Union[Union[Mapping[Any, Hashable], Callable[[Any],
   5088     # Hashable]], Hashable, None]"; expected "Union[Mapping[Any,
   5089     # Hashable], Callable[[Any], Hashable], None]"
-> 5090     return super()._rename(
   5091         index,  # type: ignore[arg-type]
   5092         copy=copy,
   5093         inplace=inplace,
   5094         level=level,
   5095         errors=errors,
   5096     )
   5097 else:
   5098     return self._set_name(index, inplace=inplace, deep=copy)

File ~/work/pandas/pandas/pandas/core/generic.py:1140, in NDFrame._rename(self, mapper, index, columns, axis, copy, inplace, level, errors)
   1138     return None
   1139 else:
-> 1140     return result.__finalize__(self, method="rename")

File ~/work/pandas/pandas/pandas/core/generic.py:6262, in NDFrame.__finalize__(self, other, method, **kwargs)
   6255 if other.attrs:
   6256     # We want attrs propagation to have minimal performance
   6257     # impact if attrs are not used; i.e. attrs is an empty dict.
   6258     # One could make the deepcopy unconditionally, but a deepcopy
   6259     # of an empty dict is 50x more expensive than the empty check.
   6260     self.attrs = deepcopy(other.attrs)
-> 6262 self.flags.allows_duplicate_labels = other.flags.allows_duplicate_labels
   6263 # For subclasses using _metadata.
   6264 for name in set(self._metadata) & set(other._metadata):

File ~/work/pandas/pandas/pandas/core/flags.py:96, in Flags.allows_duplicate_labels(self, value)
     94 if not value:
     95     for ax in obj.axes:
---> 96         ax._maybe_check_unique()
     98 self._allows_duplicate_labels = value

File ~/work/pandas/pandas/pandas/core/indexes/base.py:715, in Index._maybe_check_unique(self)
    712     duplicates = self._format_duplicate_message()
    713     msg += f"\n{duplicates}"
--> 715 raise DuplicateLabelError(msg)

DuplicateLabelError: Index has duplicates.
      positions
label
b        [0, 1]

Warning

This is an experimental feature. Currently, many methods fail to propagate the allows_duplicate_labels value. In future versions it is expected that every method taking or returning one or more DataFrame or Series objects will propagate allows_duplicate_labels.
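Because propagation is not yet guaranteed for every method, a defensive pattern is to re-assert the flag explicitly after the steps of your pipeline rather than relying on it surviving each intermediate operation. The sketch below only illustrates that pattern, reusing the df defined earlier; the rename introduces no duplicate labels, so re-applying the flag succeeds:

>>> checked = df.rename(columns=str.lower).set_flags(allows_duplicate_labels=False)
>>> checked.flags.allows_duplicate_labels
False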