| id (int64) | number (int64) | title (string) | state (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | html_url (string) | is_pull_request (bool) | pull_request_url (string) | pull_request_html_url (string) | user_login (string) | comments_count (int64) | body (string) | labels (list) | reactions_plus1 (int64) | reactions_minus1 (int64) | reactions_laugh (int64) | reactions_hooray (int64) | reactions_confused (int64) | reactions_heart (int64) | reactions_rocket (int64) | reactions_eyes (int64) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 3,305,531,502 | 62,079 | DOC: Add docstring for asarray_tuplesafe in common.py | open | 2025-08-09T01:22:19 | 2025-08-21T20:39:25 | null | https://github.com/pandas-dev/pandas/pull/62079 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62079 | https://github.com/pandas-dev/pandas/pull/62079 | eduardocamacho10 | 0 |
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
| 3,305,507,357 | 62,078 | DOC: Validate docstrings for tseries.offsets.BusinessDay | open | 2025-08-09T00:50:28 | 2025-08-09T00:50:29 | null | https://github.com/pandas-dev/pandas/pull/62078 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62078 | https://github.com/pandas-dev/pandas/pull/62078 | doughillyer | 0 |
- [x] closes #58063 (partially)
- [ ] Tests added and passed if fixing a bug or adding a new feature
- [x] All code checks passed.
- [ ] Added type annotations to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
---
This PR validates and fixes the docstrings for `pandas.tseries.offsets.BusinessDay` and several of its properties to conform with the NumPy docstring standard.
The following changes were made:
- Moved the `Parameters` section from the `BusinessDay` class docstring to the `__init__` method of the parent `BusinessMixin` class to resolve a `PR02` validation error. This aligns with the NumPy docstring standard.
- Added fully-featured docstrings (including `See Also` and `Examples`) to the `.calendar`, `.holidays`, and `.weekmask` properties to resolve `GL08` errors.
- Removed the now-passing methods from the ignore list in `ci/code_checks.sh`.
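As an illustration of the docstring shape these `GL08` fixes call for, here is a minimal self-contained sketch following the NumPy docstring standard (the class and values are illustrative, not the PR's actual text):
```python
class Offset:
    """Stand-in class for illustration; not the pandas implementation."""

    @property
    def weekmask(self) -> str:
        """
        Weekmask of valid business days.

        See Also
        --------
        Offset.holidays : Dates excluded from the set of valid days.

        Examples
        --------
        >>> Offset().weekmask
        'Mon Tue Wed Thu Fri'
        """
        return "Mon Tue Wed Thu Fri"
```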
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
| 3,305,401,725 | 62,077 | BUG: Bug fix for implicit upcast to float64 for large series (more than 1000000 rows) | open | 2025-08-08T23:00:07 | 2025-08-08T23:06:55 | null | https://github.com/pandas-dev/pandas/pull/62077 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62077 | https://github.com/pandas-dev/pandas/pull/62077 | allecole | 1 |
## PR Requirements
- [x] closes #61951
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v3.0.0.rst` file if fixing a bug or adding a new feature.
## Issue Description
Performing binary operations on larger `Series` with `dtype == 'float32'` leads to unexpected upcasts to float64.
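The issue's original snippet is not reproduced in this record; a hedged sketch of the kind of reproducer described (illustrative values) might be:
```python
import numpy as np
import pandas as pd

# More than 1,000,000 rows, so the operation takes the large-Series code path.
s = pd.Series(np.ones(2_000_000, dtype="float32"))
print(s.dtype, (s + 1.5).dtype)  # reportedly prints: float32 float64
```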
The example above prints `float32 float64`.
Calling `to_numpy()` on the Series before the addition avoids the implicit upcast.
## Expected Behavior
I expect above snippet to print `float32 float32`.
## Bug Fix
Changed scalar conversion logic for NumPy floating scalars to avoid automatic conversion to Python `float`. Now returns a scalar of the original NumPy dtype to preserve type and prevent unintended dtype upcasts.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ["pre-commit.ci autofix"] |
| 3,304,976,846 | 62,076 | BUG: Fix tz_localize(None) with Arrow timestamp | closed | 2025-08-08T19:10:09 | 2025-08-21T11:16:25 | 2025-08-11T16:29:28 | https://github.com/pandas-dev/pandas/pull/62076 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62076 | https://github.com/pandas-dev/pandas/pull/62076 | yuanx749 | 2 |
- [x] closes #61780
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
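The PR's own reproducer is not included in this record; a hedged reconstruction of the symptom from the title and issue #61780 (illustrative values, exact failure mode assumed):
```python
import pandas as pd

ser = pd.Series(
    pd.to_datetime(["2024-01-01 10:00"]).tz_localize("US/Eastern")
).astype("timestamp[ns, tz=US/Eastern][pyarrow]")
# Dropping the timezone should preserve the local wall time (10:00); before
# the fix, the Arrow-backed path reportedly returned a shifted wall time.
print(ser.dt.tz_localize(None))
```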
| ["Timezones", "Arrow"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"Thanks @yuanx749 ",
"Disappointed to see this wasn't in the 2.3.2 release.\r\n\r\nThis bug is a ***silent data corruption bug*** so IMO should be released ASAP."
] |
| 3,304,867,341 | 62,075 | DOC: Fix docstring validation errors for pandas.util.hash_pandas_object | open | 2025-08-08T18:21:03 | 2025-08-17T16:06:22 | null | https://github.com/pandas-dev/pandas/pull/62075 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62075 | https://github.com/pandas-dev/pandas/pull/62075 | EliotGreenhalgh123 | 0 |
- [ ] partially addresses #58063.
- [X] Passed python scripts/validate_docstrings.py pandas.util.hash_pandas_object (No changes to code, no additional tests)
- [X] All [code checks passed]
- [ ] Added [type annotations] NA: no new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file. NA: no new feature or bug fix.
- Description: fixed the PR07 and SA01 docstring validation errors for `pandas.util.hash_pandas_object`.
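A usage sketch of the function whose docstring is being fixed (illustrative input):
```python
import pandas as pd

obj = pd.Series([1, 2, 3])
print(pd.util.hash_pandas_object(obj))  # one uint64 hash value per element
```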
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
| 3,304,386,279 | 62,074 | BUG(CoW): also raise for chained assignment for .at / .iat | closed | 2025-08-08T15:12:18 | 2025-08-15T08:31:48 | 2025-08-08T16:42:48 | https://github.com/pandas-dev/pandas/pull/62074 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62074 | https://github.com/pandas-dev/pandas/pull/62074 | jorisvandenbossche | 3 |
Noticed while working on https://github.com/pandas-dev/pandas/issues/61368. I assume this was an oversight in the original PR https://github.com/pandas-dev/pandas/pull/49467 adding those checks. Also added some more explicit tests for all of loc/iloc/at/iat
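A minimal sketch of the chained-assignment pattern the PR now catches for `.at`/`.iat`, matching the existing `.loc`/`.iloc` checks (illustrative data):
```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
# df["a"] is a temporary object, so this write can never reach df under
# Copy-on-Write; pandas signals this with a ChainedAssignmentError.
df["a"].at[0] = 10
```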
| ["Copy / view semantics"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"Thanks @jorisvandenbossche ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 b84ac0ec647902b9dd0c1bcda790b289f577b617\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #62074: BUG(CoW): also raise for chained assignment for .at / .iat'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-62074-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #62074 on branch 2.3.x (BUG(CoW): also raise for chained assignment for .at / .iat)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/62115"
] |
| 3,304,365,588 | 62,073 | MNT: migrate from codecs.open to open | closed | 2025-08-08T15:04:46 | 2025-08-08T16:43:46 | 2025-08-08T16:43:40 | https://github.com/pandas-dev/pandas/pull/62073 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62073 | https://github.com/pandas-dev/pandas/pull/62073 | ngoldbaum | 1 |
c.f. #61950, towards supporting Python 3.14
See [the 3.14 deprecations](https://docs.python.org/3.14/whatsnew/3.14.html#deprecated) and https://github.com/python/cpython/issues/133036 for more details.
There is one spot that was nontrivial - the use in `pandas.util._print_versions` I think was buggy - it wanted to write a file in bytes mode and also specified an encoding. `codecs.open` didn't blink at this but regular `open` is more restrictive. I think what I updated it to is correct and this is a latent bug, probably introduced a long time ago in the 2->3 transition.
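A minimal sketch of the migration described above (illustrative, not the PR's actual diff):
```python
import codecs

# Before: codecs.open is deprecated in Python 3.14.
with codecs.open("out.txt", "w", encoding="utf-8") as f:
    f.write("data")

# After: the built-in open accepts the same arguments here.
with open("out.txt", "w", encoding="utf-8") as f:
    f.write("data")
```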
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| ["Testing"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"Thanks @ngoldbaum "
] |
| 3,304,362,255 | 62,072 | BUG: DataFrame.to_json() does not accurately serialize floats such that they can be deserialized without lost information | open | 2025-08-08T15:03:26 | 2025-08-14T00:44:42 | null | https://github.com/pandas-dev/pandas/issues/62072 | true | null | null | PentageerJoshuaMeetsma | 5 |
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame({"x":[0.2617993877991494,0.111111111111111112]}) # two such examples that result in incorrect truncation
df.to_json("out.json",double_precision=15)
df2 = pd.read_json("out.json")
```
### Issue Description
Using the `DataFrame.to_json()` method to serialize a `DataFrame` can result in lost data when serializing floats, such that the data cannot be recovered when reloading the json file, even when using the maximum allowable `double_precision` parameter, at 15.
This is the result of the `to_json()` method truncating floats that should instead be written out in full. The truncation may even be intended behaviour: the default value of the `double_precision` parameter is not 15 but 10, which discards still more digits. It should not work this way. JSON stores all numbers as text, so the format itself imposes no inherent loss of data, and a user should reasonably expect to retrieve an exact copy of data saved in the JSON format at a later time.
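The arithmetic behind this: a float64 can require 17 significant digits to round-trip exactly, two more than `to_json`'s maximum `double_precision` of 15, as a short check shows (this also underlies the `'.17g'` workaround discussed in the comments below):
```python
x = 0.2617993877991494
print(format(x, ".15g"))              # 0.261799387799149 - last digit lost
print(float(format(x, ".17g")) == x)  # True: 17 significant digits round-trip
```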
### Expected Behavior
```python
df = pd.DataFrame({"x":[0.2617993877991494,0.111111111111111112]})
df.to_json("out.json",double_precision=15)
df2 = pd.read_json("out.json")
print(df["x"][0],df["x"][1])
# output: 0.2617993877991494 0.11111111111111112
print(df2["x"][0],df2["x"][1])
# expected output: 0.2617993877991494 0.11111111111111112
# actual output: 0.261799387799149 0.11111111111111101
print(df["x"][0]==df2["x"][0])
# expected output: True (as all we have done is saved the data to the json format and reloaded it)
# actual output : False
print(df["x"][1]==df2["x"][1])
# expected output: True (as all we have done is saved the data to the json format and reloaded it)
# actual output : False
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.2
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 186 Stepping 2, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_Canada.1252
pandas : 2.2.3
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.2
Cython : None
sphinx : 8.2.3
IPython : 9.3.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.3
numba : None
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.3
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| ["Bug", "IO JSON", "Needs Triage"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"@PentageerJoshuaMeetsma Hey can i work on this?",
"Can I implement a fix?\n",
"The float precision gets lost in DataFrame.to_json() because the default JSON encoder doesn’t keep enough decimal places, to fix it you can convert the floats to strings with high precision (for example, using format(value, '.17g')) before saving to JSON. This way, the values stay exact when you write and read the JSON back",
"> The float precision gets lost in DataFrame.to_json() because the default JSON encoder doesn’t keep enough decimal places, to fix it you can convert the floats to strings with high precision (for example, using format(value, '.17g')) before saving to JSON. This way, the values stay exact when you write and read the JSON back\n\nThis is the solution I have been using, but it's definitely a bit of a hack. If I intend to save a floating point decimal using JSON, it seems to me that I should at very least be able to fully preserve that floating point decimal at full precision, and that should probably even be the default behaviour. What's the point of being able to serialize the data in a given format if the serialization does not actually preserve the data in that format with any integrity? Why not just have the `.to_json()` function just run the string conversion automatically, so that you can't save it as a decimal but can save it as a string, but are at least guaranteed to be able to retrieve your data with full integrity?",
"> [@PentageerJoshuaMeetsma](https://github.com/PentageerJoshuaMeetsma) Hey can i work on this?\n\n> Can I implement a fix?\n\nI don't mind if anyone else wants to implement a fix, I was not planning on writing a fix myself.\n\nI reported this issue because although the behaviour seems to be working as intended, I think the behaviour is still incorrect from what it *should* do, and by reporting this as a bug there can be a discussion about whether there should be a different behaviour or not. This is a bug report that is almost more about the *philosophy* of what JSON serialization is supposed to accomplish rather than a specific issue of implementation.\n"
] |
| 3,303,877,602 | 62,071 | BUG: Pandas: Custom Metadata in DataFrame Subclass Not Deep-Copied | open | 2025-08-08T12:27:24 | 2025-08-14T00:45:09 | null | https://github.com/pandas-dev/pandas/issues/62071 | true | null | null | Qi-henry | 3 |
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd


class MyDataFrame(pd.DataFrame):
    _metadata = [
        'name',
        'extra_info',
    ]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.name = None
        self.extra_info = {}

    @property
    def n_unique_wafer(self):
        if 'wafer' in self.columns:
            return len(self['wafer'].unique())
        else:
            return None

    @property
    def _constructor(self):
        return MyDataFrame


if __name__ == "__main__":
    data = {
        'A': [1, 2, 3, 1, 2],
        'B': ['A', 'B', 'A', 'C', 'B']
    }
    df = MyDataFrame(data)
    df.extra_info = {"source": "Lab Experiment"}

    # test copy()
    copied_df = df.copy()
    df.extra_info["source"] = 'a'
    print("Extra Info:", copied_df.extra_info)
    print("df Extra Info:", df.extra_info)  # extra_info in df is changed
```
### Issue Description
When using a custom subclass of pandas.DataFrame with additional metadata attributes (e.g., extra_info declared in _metadata), calling df.copy() or df.copy(deep=True) does not deep-copy the custom metadata attributes. Instead, the metadata remains shallow-copied, causing unintended shared references between the original and copied DataFrames.
### Expected Behavior
Expected:
- `copied_df.extra_info` should retain the original value `{"source": "Original Data"}`.
- Modifications to `df.extra_info` should not affect the copy.

Actual:
- Both the original and the copied DataFrame share the same metadata object.
- `df.copy(deep=True)` does not deep-copy the `_metadata` attributes.
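A hedged user-side workaround sketch (not a pandas fix): deep-copy each `_metadata` attribute in `__finalize__`, which `copy()` invokes on the new object:
```python
import copy

import pandas as pd

class MyDataFrame(pd.DataFrame):
    _metadata = ["name", "extra_info"]

    @property
    def _constructor(self):
        return MyDataFrame

    def __finalize__(self, other, method=None, **kwargs):
        # Deep-copy metadata instead of sharing references with `other`.
        if isinstance(other, MyDataFrame):
            for attr in self._metadata:
                object.__setattr__(
                    self, attr, copy.deepcopy(getattr(other, attr, None))
                )
        return self
```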
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.9.13
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.19045
machine : AMD64
processor : Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE :
pandas : 2.3.1
numpy : 1.26.4
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : 3.0.12
sphinx : None
IPython : 8.18.1
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : 5.3.0
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.13.1
sqlalchemy : None
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : 3.2.0
zstandard : None
tzdata : 2025.2
qtpy : 2.4.2
pyqt5 : None
</details>
| ["Bug", "metadata", "Needs Triage", "Subclassing"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"It seems this is expected according to the note in [df.copy](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.copy.html#pandas.DataFrame.copy).\n\n>When deep=True, data is copied but actual Python objects will not be copied recursively, only the reference to the object.\n",
"So what should I do to avoid changing _metadata?",
"I dug in and the simplest fix seems to be: when NDFrame.copy(deep=True) creates the new object, deepcopy each attribute listed in _metadata (with a safe fallback if deepcopy fails) and keep the current shallow behavior for deep=False. I’ve got unit tests that show this stops sharing mutable metadata (dicts/lists) between original and copy. If that sounds good I can open a PR with the patch, tests, and a short changelog , otherwise I’m happy to switch this to a config/opt-in approach if maintainers prefer."
] |
| 3,303,183,371 | 62,070 | Move chained assignment detection to cython for Python 3.14 compat | open | 2025-08-08T08:42:14 | 2025-08-15T16:48:40 | null | https://github.com/pandas-dev/pandas/pull/62070 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62070 | https://github.com/pandas-dev/pandas/pull/62070 | jorisvandenbossche | 1 |
Draft exploring a solution for https://github.com/pandas-dev/pandas/issues/61368
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"Some commented-out code needs to be removed, otherwise looks nice"
] |
| 3,302,395,020 | 62,069 | DOC: Sort the pandas API reference navbar in alphabetical order | closed | 2025-08-08T02:19:34 | 2025-08-08T15:58:53 | 2025-08-08T15:58:53 | https://github.com/pandas-dev/pandas/pull/62069 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62069 | https://github.com/pandas-dev/pandas/pull/62069 | DoNguyenHung | 1 |
- [x] closes #59164
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Changes made:
- Added JavaScript to sort only the items in each section of the sidebar and not the section headers.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"Thanks for the PR, but as noted in the issue it still `needs discussion` on the solution to move forward so closing"
] |
| 3,302,254,495 | 62,068 | Update pytables.py to include a more thorough description of the min_itemsize variable | open | 2025-08-08T00:35:29 | 2025-08-08T15:37:54 | null | https://github.com/pandas-dev/pandas/pull/62068 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62068 | https://github.com/pandas-dev/pandas/pull/62068 | JoeDediop | 1 |
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
I am a student contributing for the first time; none of the additions were generated with AI, and this PR description is not AI-generated.
I am addressing issue #14601, where the current documentation for `min_itemsize` is not thorough enough. Based on my research, the problem is that people treat it as a character count rather than a byte size for strings written to a table.
I kept the additions simple to avoid confusion and am open to suggestions on how to improve this contribution. If there are any specific issues, please point me to the relevant section of the contribution guide so I can better understand any errors I might have made.
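The byte-count versus character-count distinction the added documentation targets, in two lines:
```python
s = "香港"
print(len(s))                  # 2 characters
print(len(s.encode("utf-8")))  # 6 bytes - what min_itemsize must cover
```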
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"Let’s start with the CI failures. Can you address them?"
] |
| 3,302,011,263 | 62,067 | DOC: Docstring additions for min_itemsize | closed | 2025-08-07T22:02:13 | 2025-08-07T23:49:18 | 2025-08-07T23:49:17 | https://github.com/pandas-dev/pandas/pull/62067 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62067 | https://github.com/pandas-dev/pandas/pull/62067 | JoeDediop | 1 |
## Problem Summary
The current pandas documentation for `min_itemsize` in HDFStore methods doesn’t clearly explain that it refers to byte length, not character length. This causes confusion when working with multi-byte characters.
## Proposed Addition to HDFStore.put() and HDFStore.append() docstrings
Add this clarification to the `min_itemsize` parameter description in the appropriate methods:
```
min_itemsize : int, dict, or None, default None
Minimum size in bytes for string columns when format='table'.
int - Apply the same minimum size to all string columns,
dict - Map column names to their minimum sizes or,
None - use the default sizing
Important: This specifies the byte length after encoding, not the
character count. For multi-byte characters, calculate the required
size using the encoded byte length.
See examples below for use.
```
And adding this to the example section for each docstring:
```
Examples:
- ASCII 'hello' = 5 bytes
- UTF-8 '香' = 3 bytes (though only 1 character)
- To find byte length: len(string.encode('utf-8'))
```
## Why This Helps
- Directly addresses the confusion in issue #14601
- Provides practical guidance for users working with international text
- Keeps the explanation concise and focused
- Includes actionable examples
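A hedged usage sketch of sizing `min_itemsize` by encoded byte length rather than character count (requires PyTables; file and column names are illustrative):
```python
import pandas as pd

df = pd.DataFrame({"name": ["香港", "hello"]})
# Longest value measured in UTF-8 bytes, not characters.
nbytes = int(df["name"].str.encode("utf-8").str.len().max())
with pd.HDFStore("store.h5") as store:
    store.put("df", df, format="table", min_itemsize={"name": nbytes})
```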
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"AI-generated pull requests are not welcome in this project, closing"
] |
| 3,301,539,329 | 62,066 | DEPS: Drop Python 3.10 | closed | 2025-08-07T18:46:55 | 2025-08-08T20:39:16 | 2025-08-08T19:54:59 | https://github.com/pandas-dev/pandas/pull/62066 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62066 | https://github.com/pandas-dev/pandas/pull/62066 | mroeschke | 1 |
- [ ] closes https://github.com/pandas-dev/pandas/issues/60059
| ["Build", "Deprecate", "Python 3.10"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"Thanks @mroeschke "
] |
| 3,301,491,156 | 62,065 | DEPS: Bump adbc-driver-postgresql/sqlite to 1.2 | closed | 2025-08-07T18:26:54 | 2025-08-11T20:59:04 | 2025-08-11T20:59:00 | https://github.com/pandas-dev/pandas/pull/62065 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62065 | https://github.com/pandas-dev/pandas/pull/62065 | mroeschke | 1 |
These will have been out for ~1 year by the time (hopefully) pandas 3.0 comes out
| ["Dependencies"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"needs rebase, otherwise lgtm"
] |
| 3,301,419,589 | 62,064 | ENH: Persistent and Interoperable Accessors for NDFrame and Index (sliding window POC) | closed | 2025-08-07T18:01:32 | 2025-08-08T15:57:13 | 2025-08-08T15:57:13 | https://github.com/pandas-dev/pandas/issues/62064 | true | null | null | dangreb | 3 |
### Feature Type
- [x] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
### Please regard
_Took some free time this month for a run of the mill architecture assessment over pandas Python sources, this is delivery.
For health related reasons, I use LLMs redactions to prevent severe prolix on public comms. I'll mark personal remarks in italics.
Engage at will, worry free._
### Summary
This issue proposes an enhancement to the internal architecture of accessors in pandas, aiming to support **persistent, interoperable accessors** that can coexist with and respond to their host `NDFrame` or `Index` objects in a lifecycle-aware and low-copy fashion.
### Motivation
Accessors in pandas today (e.g., `.str`, `.dt`, `.cat`, `.attrs`) are **ephemeral**, created on-demand for every access. While this is lightweight and efficient for simple use cases, it limits their utility in richer modeling contexts, especially for:
- Persisting contextual or intermediate state;
- Synchronizing behavior after host transformations (e.g., `.copy()`, slicing, chaining);
- Acting as first-class modeling layers inside pandas pipelines.
These limitations are especially relevant in **data engineering and ML pipelines**, where pandas often loses its centrality once feature computation becomes non-trivial — forcing users to fall back to NumPy, custom classes, or external orchestration layers.
_Also beware of safe excess over implementation of promising concepts around the topic. Obstructive inclinations drive punctual desertion, often followed by antithetical responses, the well-known crux of enhancement framework design. Structure that is expressive and interoperable, supports continuity, comprehension, and reuse, and grants high reach under a few familiar instrument sets should be the guiding principle of the approach._
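For contrast, a minimal runnable sketch of today's ephemeral behavior (the accessor name `fe` is hypothetical):
```python
import pandas as pd

@pd.api.extensions.register_dataframe_accessor("fe")
class FeatureAccessor:
    def __init__(self, obj: pd.DataFrame):
        self._obj = obj
        self.state = {}

df = pd.DataFrame({"a": [1, 2]})
df.fe.state["fitted"] = True
print(df.fe.state)         # {'fitted': True} - cached on this instance
print(df.copy().fe.state)  # {} - the accessor does not survive .copy()
```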
### Feature Description
### Proposal
Introduce a model for **persistent accessors**, which:
- Are instantiated once per host instance and survive as long as the host exists;
_- Lifetime enrolls either to the high-level pandas object, the underlying data supply, or a composite of both legs_
- Can respond to critical transformations on the host (e.g., copy, assignment, indexing);
- Are managed via hooks, weak references, or catalog mechanisms;
- Remain interoperable with existing pandas workflows;
- Allow extension via a structured API for third parties.
_Advise a tripartite roadmap: issue persistence and minimal lifetime-only hooks first, followed by an immersion pass for careful Goldilocks hook placement toward a fully managed event-propagation design. Finally, design and build the provision infrastructure, availing accessors of the outlined read access to source data entities._
This does **not** require a full architectural rewrite, and can initially be scoped to enable a formal accessor lifecycle with opt-in semantics and soft hooks.
### Alternative Solutions
### Proof of Concept (POC)
I'm currently prototyping an accessor extension under a sliding window use case. The idea is to offer **vertical rolling windows** of fixed size, supporting:
- Multiple successive rolls along axis 0 (depth stacking);
- Minimal memory footprint (zero-copy across views);
- "Scoped local views" for stateless metric computation, visual flattening, and full-dimensional window analysis;
- A lightweight dispatcher for `apply`-like routines, with hyperparameter support and batch-level threading (GIL-free environments only).
**Key implementation details include:**
- Accessors are indexed by a catalog based on `weakref.WeakKeyDictionary`, using `attrs` as the sole strong reference;
- A root hook object (UUID-backed immutable `set` subclass) is stored in `.attrs`, which carries the accessor identity;
- Copy operations trigger deep hooks via custom `__deepcopy__`, enabling propagation of the accessor along with host duplication;
- When the host NDFrame is garbage collected, the accessor is finalized via `weakref.finalize`;
- NDArray-level hooks are being tested to track mathematical and logical transformations on the underlying data;
- Implementation is being tested in a dev environment with no-GIL support, built using scientific nightly wheels via Mamba.
This approach is entirely backward-compatible and demonstrates how a structured accessor lifecycle could empower pandas to participate in more advanced modeling scenarios — without becoming a full modeling framework itself.
### Additional Context
### Use Case Implications
Persistent, interoperable accessors would allow:
- Declarative pipelines inside pandas (e.g., `.fe`, `.validate`, `.track`, etc.);
- Advanced feature engineering without leaving NDFrame;
- Safer data transformation propagation (accessor-aware copies and views);
- Reuse and encapsulation of logic across modeling contexts.
_- Architectural equidistance with _
It could also help pandas reclaim some conceptual territory currently dominated by _honestly interface-wise dead-ringer doppelganger marvels_ (e.g., Polars, Modin, cuDF), by moving beyond isomorphic syntax and toward architectural expression.
| ["Enhancement", "Needs Triage"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"This looks like AI slop. Can you give me a human-only Tl;dr? An example of lack of persistence being a problem?",
"> This looks like AI slop. Can you give me a human-only Tl;dr? An example of lack of persistence being a problem?\n\nThere's no context of problems, as far as i'm concerned. As is should meet intended requirements, as it does. I speak of new functionality motivated by hindered potential. My irelevant sensibilities questions instance spamming against OO syntax addiction, or perhaps another groundhog day easter egg? Merely aesthetics verbatim ofc, dont be mad pls.\n\nAdvice is for revision standing on more ambitious grounds. OP redaction punctuates my proposal, a solution for component accessors as first class citizens, federated to runtime conviviality and properly supplied with reading privileges over data primitives owned by the host. Granular design of eevent propagation mechanics as natural gauge for it's own availability, measured against criteria established as equaly granular handling interface entrypoints ABC. Max allowed propagation hooks per accessor for performance and narrower, specialized accessors. \n\nSuccessful imlementation defeats impending incentives for transition into array only scopes for computation intersive operations. Now one can alternate operations aiming dataframes, ndarrays, arrows, etc under the same modular composition. Upgrades DataFrames to potential cornerstones for modular compositions, enrolling a variety of independent functionalities into the range of its popular instruments. Add to that some essentials, like on demand accessor local data rendering as dataframes, on demand consolitated output composer, and so forth. \n\nAnyway i'll finish the demo i mentioned first, then i should act further on it, you be so kind!",
"Thanks for the issue, but it appears this is not actionable at this time so closing. Feel free to open a follow up issue if there's a specific ask with regards to creating a pandas accessor"
] |
| 3,301,083,705 | 62,063 | BUG: is_scalar(Enum) gives False | open | 2025-08-07T16:02:45 | 2025-08-24T12:46:18 | null | https://github.com/pandas-dev/pandas/issues/62063 | true | null | null | cmp0xff | 9 |
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
from enum import Enum, auto

from pandas.api.types import is_scalar


class Thing(Enum):
    one = auto()
    two = auto()


print(is_scalar(Thing.one))  # False
```
### Issue Description
`pandas` does not treat `Enum` members as scalars.
Inspired by https://github.com/pandas-dev/pandas-stubs/issues/1288#issuecomment-3157127431.
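The downstream use case from that report, as a self-contained sketch: the comparison works at runtime, but type checkers reject it because the stubs restrict `eq` to recognized scalars.
```python
from enum import Enum, auto

import pandas as pd

class Thing(Enum):
    one = auto()
    two = auto()

print(pd.Series([Thing.one, Thing.two]).eq(Thing.one))  # [True, False]
```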
### Expected Behavior
`is_scalar(Thing.one)` should give `True`.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.13.5
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.26100
machine : AMD64
processor : AMD64 Family 25 Model 116 Stepping 1, AuthenticAMD
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : Czech_Czechia.1252
pandas : 2.3.1
numpy : 2.2.6
pytz : 2023.4
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : None
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : 1.5.0
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.7.0
html5lib : None
hypothesis : 6.136.5
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.3
numba : 0.61.2
numexpr : 2.11.0
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : 2.9.10
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : 1.0.10
s3fs : None
scipy : 1.16.1
sqlalchemy : 2.0.41
tables : None
tabulate : None
xarray : 2025.7.1
xlrd : 2.0.2
xlsxwriter : 3.2.5
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
| ["Docs"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"take",
"This behavior is consistent with the current pandas implementation of pandas.api.types.is_scalar.\nAccording to the documentation, only a specific set of built-in and pandas-native types are considered scalars (e.g., Python numeric types, strings, datetimes, Timedeltas, Periods, Decimals, Intervals, Fractions, etc.).\n\nSince Enum members are user defined objects and not part of this explicit list, is_scalar(Thing.one) correctly returns False.\nIf I have misunderstood this reasoning, please feel free to correct me.",
"Hi @Aniketsy , thank you for the comment. As written in the Issue Description, this issue #62063 comes from the use case in pandas-dev/pandas-stubs#1288\n```py\npd.Series([Thing.ONE, Thing.TWO]).eq(Thing.ONE)\n```\nwhere `Thing` is an `Enum`. Type checkers mark this line of code as mistyped, because `eq` only supports Pandas scalars, as is in Pandas documentation.\n\nThe use case can be fixed either by relaxing the typing of `eq` (not restricted to Pandas scalars), or relaxing the definition of Pandas scalars (include everything that can be the element of Series).",
"@cmp0xff Thanks for the clarification.\n\nIf @shrutisachan08 is not actively working on this, I’d be glad to help and work on a fix. Of course, I’ll wait for confirmation before proceeding. Please let me know if that would be appropriate.\n\n",
"I would like to mention that I am currently working on this issue .",
"This issue here is one of documentation. `is_scalar` is reporting on scalar values that can be in various (non-object) pandas dtypes whereas the documentation here is differentiating between two different types of input: list-like (though it says `Series`) and non-list-like. I think specifying that anything beyond `np.ndarray`, `list`, `tuple`, and `Series` will be treated as a scalar value would clarify here.",
"> This issue here is one of documentation. `is_scalar` is reporting on scalar values that can be in various (non-object) pandas dtypes whereas the documentation here is differentiating between two different types of input: list-like (though it says `Series`) and non-list-like. I think specifying that anything beyond `np.ndarray`, `list`, `tuple`, and `Series` will be treated as a scalar value would clarify here.\n\nHi @rhshadrach which documentation are your refering to? The docstring for is_scalar() method in _libs\\lib.pyx or the docstring of pandas.Series.eq() method which is used in [https://github.com/pandas-dev/pandas-stubs/issues/1288](url)?\n\nAsking this becasue I am new to github, python, and pandas and would like to take this issue as my first issue. \n\n**Updated:** I think it is pandas.Series.eq() where the docstring should be changed. I will take this.\n\nSincere regards\nMaz",
"take",
"@mazhar996 - my comment above is referring to the documentation of `eq`."
] |
| 3,297,364,293 | 62,062 | Update put() and append() docstrings | closed | 2025-08-06T16:42:25 | 2025-08-06T16:43:07 | 2025-08-06T16:43:07 | https://github.com/pandas-dev/pandas/pull/62062 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62062 | https://github.com/pandas-dev/pandas/pull/62062 | JoeDediop | 0 |
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
| 3,297,149,062 | 62,061 | DOC: There is an unreadable dark text on the dark background in the getting started page | closed | 2025-08-06T15:24:26 | 2025-08-06T15:30:58 | 2025-08-06T15:30:58 | https://github.com/pandas-dev/pandas/issues/62061 | true | null | null | surenpoghosian | 1 |
The text under the "Intro to pandas" section is unreadable due to being displayed in a dark color over the dark background.
<img width="1512" height="853" alt="Image" src="https://github.com/user-attachments/assets/38020b97-44cb-4293-ab71-40bcb28e4f51" />
The issue is valid for the `2.3` and `2.2` versions.
- [2.3](https://pandas.pydata.org/pandas-docs/version/2.3/getting_started/index.html#intro-to-pandas)
- [2.2](https://pandas.pydata.org/pandas-docs/version/2.2/getting_started/index.html#intro-to-pandas)
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"This will be fixed once pandas 3.0 is released (currently the dev docs is more visible)\n\nhttps://pandas.pydata.org/docs/dev/getting_started/index.html"
] |
| 3,297,104,474 | 62,060 | API: make construct_array_type non-classmethod | closed | 2025-08-06T15:10:09 | 2025-08-10T16:20:32 | 2025-08-06T17:29:10 | https://github.com/pandas-dev/pandas/pull/62060 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62060 | https://github.com/pandas-dev/pandas/pull/62060 | jbrockmendel | 1 |
- [x] closes #58663
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Makes the method robust to the possibility of keywords (e.g. na_value, storage) that determine what EA subclass you get. StringDtype already does this.
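An illustration of the instance-dependent behavior referenced here: `StringDtype` already chooses its array class from a per-instance parameter.
```python
import pandas as pd

print(pd.StringDtype(storage="python").construct_array_type())
print(pd.StringDtype(storage="pyarrow").construct_array_type())  # Arrow-backed
```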
| ["ExtensionArray"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"Thanks @jbrockmendel "
] |
| 3,296,747,180 | 62,059 | BUG: Inconsistent behavior of `MultiIndex.union` depending on duplicates and names | open | 2025-08-06T13:35:07 | 2025-08-17T17:19:20 | null | https://github.com/pandas-dev/pandas/issues/62059 | true | null | null | torfsen | 0 |
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
index1_without_name = pd.Index([1, 2])
index1_with_name = pd.Index([1, 2], name="x")
index2_without_duplicates = pd.Index([2, 3])
index2_with_duplicates = pd.Index([2, 3, 2])
multi_index1_without_name = pd.MultiIndex.from_tuples([(1, "a"), (2, "b")])
multi_index1_with_name = pd.MultiIndex.from_tuples([(1, "a"), (2, "b")], names=["x", "y"])
multi_index2_without_duplicates = pd.MultiIndex.from_tuples([(2, "b"), (3, "c")])
multi_index2_with_duplicates = pd.MultiIndex.from_tuples([(2, "b"), (3, "c"), (2, "b")])
# These work
print(index1_without_name.union(index2_without_duplicates))
print(index1_without_name.union(index2_with_duplicates))
print(index1_with_name.union(index2_without_duplicates))
print(index1_with_name.union(index2_with_duplicates))
print(multi_index1_without_name.union(multi_index2_without_duplicates))
print(multi_index1_without_name.union(multi_index2_with_duplicates))
print(multi_index1_with_name.union(multi_index2_without_duplicates))
# This one raises
print(multi_index1_with_name.union(multi_index2_with_duplicates))
```
### Issue Description
For 2 `MultiIndex` instances `i1` and `i2`, `i1.union(i2)` behaves inconsistently depending on whether `i1` has names and whether `i2` has duplicates:
* If `i1` has no names or `i2` has no duplicates then `i1.union(i2)` works as expected
* If `i1` has names and `i2` has duplicates then `i1.union(i2)` raises `ValueError: cannot join with no overlapping index names`
In addition, if `i1` and `i2` are plain `Index` instances, then the case that is problematic for `MultiIndex` (names and duplicates) works as expected.
### Expected Behavior
I expect no exception to be raised. The result should contain the duplicate values of the second `MultiIndex` as duplicates, just as in the other cases for consistency (although personally this did surprise me, but that's a different topic).
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.12.7
python-bits : 64
OS : Linux
OS-release : 6.14.0-27-generic
Version : #27~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Jul 22 17:38:49 UTC 2
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.3.1
numpy : 1.26.4
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : None
IPython : 8.32.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : None
numba : None
numexpr : 2.10.2
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : 2.9.10
pymysql : None
pyarrow : 18.0.0
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : 3.10.1
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| ["Bug", "MultiIndex", "Needs Triage", "setops"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] |
| 3,296,689,449 | 62,058 | DOC: add 'Try Pandas Online' section to the 'Getting Started' page | closed | 2025-08-06T13:18:59 | 2025-08-06T15:45:12 | 2025-08-06T15:45:11 | https://github.com/pandas-dev/pandas/pull/62058 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62058 | https://github.com/pandas-dev/pandas/pull/62058 | surenpoghosian | 1 |
- [x] closes #61060
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.3.2.rst` file if fixing a bug or adding a new feature.
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"Thanks for the pull request, but I think this is being worked on as apart of https://github.com/pandas-dev/pandas/pull/61061 so closing since that contributor also added this module to link to our docs. Happy to have your contribution on other un-assigned issues"
] |
| 3,296,568,167 | 62,057 | DOC: DataFrame.to_feather() does not accept *all* file-like objects | closed | 2025-08-06T12:44:29 | 2025-08-06T21:58:34 | 2025-08-06T21:58:34 | https://github.com/pandas-dev/pandas/issues/62057 | true | null | null | qris | 4 |
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_feather.html
### Documentation problem
Since #35408, the method docs say:
> path str, path object, file-like object
> String, path object (implementing os.PathLike[str]), or file-like object implementing a binary write() function.
And indeed it does work with a BytesIO buffer or an open *file*:
```python
with open('/tmp/foo', 'wb') as handle:
    df.to_feather(handle)
```
But not with other file-like objects, such as an `AsyncWriter` from `hdfs.InsecureClient.write()`:
```python
with self.client.write(self._path(name)) as writer:
    df.to_feather(writer)

Traceback (most recent call last):
  File "/home/chris/ram-system/.venv/lib/python3.10/site-packages/pyarrow/feather.py", line 186, in write_feather
    _feather.write_feather(table, dest, compression=compression,
AttributeError: 'AsyncWriter' object has no attribute 'closed'
ValueError: I/O operation on closed file
```
I note that it's not actually supposed to work: [pyarrow.feather.write_feather](https://arrow.apache.org/docs/python/generated/pyarrow.feather.write_feather.html) says:
> dest[str](https://docs.python.org/3/library/stdtypes.html#str)
> Local destination path.
Which says nothing about file-like objects being acceptable. It does seem to have some special cases for handling buffers specifically, but this is undocumented and could change at any time.
I think that `write_feather` insists on checking the `closed` attribute of the passed handle, which this one doesn't have. It seems to work if I poke such an attribute onto the object, but it could easily stop working.
Also I know about [hdfs.ext.dataframe.write_dataframe](https://hdfscli.readthedocs.io/en/latest/api.html#module-hdfs.ext.kerberos) for this particular use case, but it only supports Avro which is not a great file format for DataFrames, and there are likely to be other *file-like* objects that people might try to pass to `to_feather()`.
Similarly, [read_feather](https://pandas.pydata.org/docs/reference/api/pandas.read_feather.html) claims to accept:
> pathstr, path object, or file-like object
> String, path object (implementing os.PathLike[str]), or file-like object implementing a binary read() function.
But `read()` is not enough:
```
File "pyarrow/_feather.pyx", line 79, in pyarrow._feather.FeatherReader.__cinit__
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 89, in pyarrow.lib.check_status
io.UnsupportedOperation: seek
```
### Suggested fix for documentation
I think it's better to describe these functions as officially taking only strings (URLs and paths) and mmap objects. File-like objects currently work but this is not guaranteed.
| ["Docs", "IO Parquet", "Closing Candidate"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"Thanks for the report. You mention:\n\n> I note that it's not actually supposed to work: [pyarrow.feather.write_feather](https://arrow.apache.org/docs/python/generated/pyarrow.feather.write_feather.html) says:\n> ...\n\nI do not follow this. What does this have to do whether `pd.write_feather` supports file-like objects?\n\n> But not with other file-like objects, such as an `AsyncWriter` from `hdfs.InsecureClient.write()`:\n\nIf the object does not have a `closed` attribute, then it is not implementing [IOBase](https://docs.python.org/3/library/io.html#io.IOBase), and therefore is not file-like.\n",
"Thanks for the quick reply!\n\n> I do not follow this. What does this have to do whether `pd.write_feather` supports file-like objects?\n\nI mean that Pandas [delegates](https://github.com/pandas-dev/pandas/blob/v2.3.1/pandas/io/feather_format.py#L38) to_feather() to `feather.write_feather`:\n\n```python\n from pyarrow import feather\n ...\n with get_handle(\n path, \"wb\", storage_options=storage_options, is_text=False\n ) as handles:\n feather.write_feather(df, handles.handle, **kwargs)\n```\n\nIt calls `get_handle` first, but that doesn't change handles, it just wraps them. Then it calls `write_feather` with that same handle. But `write_feather` does not accept a handle, so we have no right to expect this to work. The fact that it does at all is an undocumented feature of `write_feather`.\n\n> If the object does not have a `closed` attribute, then it is not implementing [IOBase](https://docs.python.org/3/library/io.html#io.IOBase), and therefore is not file-like.\n\nFair point, I wasn't aware of that connection, I read it as \"[any] object implementing a binary write() function.\"",
"Is it possible to provide a reproducer that fails and implements `IOBase`? If not, I do not think we should change the documentation here.",
"I expect it works with any object implementing `IOBase`. If you don't want to change the docs then there's nothing to do here."
] |
| 3,295,672,897 | 62,056 | ENH: pd.DataFrame.describe(): rename top to mode | closed | 2025-08-06T08:09:24 | 2025-08-06T15:32:21 | 2025-08-06T15:32:21 | https://github.com/pandas-dev/pandas/issues/62056 | true | null | null | Krisselack | 1 |
### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
In pd.DataFrame.describe(), the most frequent value is termed 'top'.
> The top is the most common value.
But there exists a statistical term 'mode' (https://en.wikipedia.org/wiki/Mode_(statistics)) depicting the same.
To reduce ambiguity, I propose renaming _top_ to _mode_, both in the docs and in the function's print-out.
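The current output the proposal would relabel:
```python
import pandas as pd

print(pd.Series(["a", "b", "a"]).describe())
# count     3
# unique    2
# top       a    <- the modal value, labelled "top"
# freq      2
# dtype: object
```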
### Feature Description
I guess it would start here (replacing top with mode):
```
def describe_categorical_1d(
    data: Series,
    percentiles_ignored: Sequence[float],
) -> Series:
    """Describe series containing categorical data.

    Parameters
    ----------
    data : Series
        Series to be described.
    percentiles_ignored : list-like of numbers
        Ignored, but in place to unify interface.
    """
    names = ["count", "unique", "mode", "freq"]
    objcounts = data.value_counts()
    count_unique = len(objcounts[objcounts != 0])
    if count_unique > 0:
        mode, freq = objcounts.index[0], objcounts.iloc[0]
        dtype = None
    else:
        # If the DataFrame is empty, set 'mode' and 'freq' to None
        # to maintain output shape consistency
        mode, freq = np.nan, np.nan
        dtype = "object"
    result = [data.count(), count_unique, mode, freq]

    from pandas import Series

    return Series(result, index=names, name=data.name, dtype=dtype)
```
### Alternative Solutions
Leave as it is.
### Additional Context
_No response_
| ["Enhancement", "Needs Triage"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"Thanks for the suggestion but this has been long standing behavior and would be a large breaking change for users expecting \"top\". I would suggest renaming this label if you prefer \"mode\". Closing"
] |
| 3,295,060,237 | 62,055 | BUG: Fix all-NaT when ArrowEA.astype to categorical | open | 2025-08-06T03:33:17 | 2025-08-18T17:33:14 | null | https://github.com/pandas-dev/pandas/pull/62055 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62055 | https://github.com/pandas-dev/pandas/pull/62055 | arthurlw | 3 |
- [x] closes #62051
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] ~Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
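A hedged reconstruction of the symptom from #62051 (not the issue's exact snippet; values illustrative):
```python
import pandas as pd

ser = pd.Series(pd.to_datetime(["2024-01-01", "2024-01-02"])).astype(
    "timestamp[ns][pyarrow]"
)
cat = pd.CategoricalDtype(pd.to_datetime(["2024-01-01", "2024-01-02"]))
print(ser.astype(cat))  # reportedly all NaT before the fix
```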
| [] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"Using `astype` with `CategoricalDtype` coerces values into `Index`, where the code tries to compare two `Index` objects with numpy and pyarrow types. \r\n\r\nIn the first example, the code does not think they are comparable in `Index._should_compare`, and returns all -1s. In the second example, the code correctly determines that they are comparable, but they try to compare it (coercing both indexes to object dtypes with `Index._find_common_type_compat`) and fail. Hence, both cases return all -1s. ",
"So we should start by fixing should_compare?",
"> So we should start by fixing should_compare?\n\nSorry I missed this earlier. The fix I pushed addresses that and resolves the second issue too. "
] |
| 3,294,962,830 | 62,054 | DOC: Inform users that incomplete reports will generally be closed | closed | 2025-08-06T02:25:09 | 2025-08-06T15:35:21 | 2025-08-06T15:35:15 | https://github.com/pandas-dev/pandas/pull/62054 | true | https://api.github.com/repos/pandas-dev/pandas/pulls/62054 | https://github.com/pandas-dev/pandas/pull/62054 | rhshadrach | 1 |
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| ["Admin"] | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[
"Thanks @rhshadrach "
] |
3,294,821,413
| 62,053
|
BUG: read_csv with engine=pyarrow and numpy-nullable dtype
|
closed
| 2025-08-06T00:50:10
| 2025-08-12T18:11:58
| 2025-08-12T17:42:27
|
https://github.com/pandas-dev/pandas/pull/62053
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62053
|
https://github.com/pandas-dev/pandas/pull/62053
|
jbrockmendel
| 8
|
- [x] closes #56136 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Also makes this code path robust to always-distinguish behavior in #62040
|
[
"IO CSV"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"> From the original issue, do you know where we are introducing float to lose precision when wanting the result type to be int?\r\n\r\nIn `arrow_table_to_pandas` the pyarrow[int64] columns get converted to np.float64, then in finalize_pandas_output that gets cast back to Int64.\r\n",
"OK I see, it's `pyarrow.Table.to_pandas` casting the int to float when there's `null`.\r\n\r\nWhat if in `arrow_table_to_pandas`, we always provide fallback `type_mapper={pyarrow ints : pandas nullable ints}` to avoid the lossy conversions, then afterwards we cast the pandas nullable ints to the appropriate type?",
"That’s basically what this is currently doing, just not in that function since it is also called from other places.\r\n\r\nI’m out of town for a few days. If you feel strongly that this logic should live inside that function I’ll move it when I get back",
"Looking at this again, I'm skeptical of moving the logic into arrow_table_to_pandas. The trouble is that between the table.to_pandas() and the .astype conversions, we have to do a bunch of other csv-keyword-specific stuff like set_index and column renaming. (Just opened #62087 to clean that up a bit). Shoe-horning all of that into arrow_table_to_pandas would make it a really big function in a way that i think is a net negative.",
"Sorry in https://github.com/pandas-dev/pandas/pull/62053#issuecomment-3160877705, I meant for `arrow_table_to_pandas` to just have this change\r\n\r\n```diff\r\ndiff --git a/pandas/io/_util.py b/pandas/io/_util.py\r\nindex 6827fbe9c9..2e15bd3749 100644\r\n--- a/pandas/io/_util.py\r\n+++ b/pandas/io/_util.py\r\n@@ -85,7 +85,14 @@ def arrow_table_to_pandas(\r\n else:\r\n types_mapper = None\r\n elif dtype_backend is lib.no_default or dtype_backend == \"numpy\":\r\n- types_mapper = None\r\n+ # Avoid lossy conversion to float64\r\n+ # Caller is responsible for converting to numpy type if needed\r\n+ types_mapper = {\r\n+ pa.int8(): pd.Int8Dtype(),\r\n+ pa.int16(): pd.Int16Dtype(),\r\n+ pa.int32(): pd.Int32Dtype(),\r\n+ pa.int64(): pd.Int64Dtype(),\r\n+ }\r\n else:\r\n raise NotImplementedError\r\n```\r\n\r\nAnd then each IO parser is responsible for manipulating this result based on the IO arguments",
"> And then each IO parser is responsible for manipulating this result based on the IO arguments\r\n\r\nThat would mean adding that logic to each of the 7 places where arrow_table_to_pandas is called, so we would almost-certainly be better off having it centralized.\r\n\r\nIf we get #62087 in then moving all the logic into arrow_table_to_pandas at least gets a little bit less bulky, so I can give it a try.\r\n",
"This is plausibly nicer.",
"Thanks @jbrockmendel "
] |
3,294,677,918
| 62,052
|
BUG: Pandas mean function fails on type pd.NA with axis=1 but not axis=0
|
closed
| 2025-08-05T23:13:56
| 2025-08-05T23:16:41
| 2025-08-05T23:16:41
|
https://github.com/pandas-dev/pandas/issues/62052
| true
| null | null |
peterktrinh
| 0
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
df = pd.DataFrame({'a': [1.0, pd.NA, 3.0], 'b': [4.0, 5.0, pd.NA]}, dtype='Float64')
df.mean(axis=0, skipna=False) # works
df.mean(axis=1, skipna=False) # fails
```
### Issue Description
the treatment of type pd.NA is inconsistent across rows versus down columns.
### Expected Behavior
the mean across rows should resolve to type pd.NA when at least one of the values in any column is of type pd.NA
### Installed Versions
pandas : 1.5.3
numpy : 1.24.4
|
[
"Bug",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,294,011,688
| 62,051
|
BUG: ArrowEA.astype to categorical returning all-NaT
|
open
| 2025-08-05T18:15:40
| 2025-08-06T03:29:06
| null |
https://github.com/pandas-dev/pandas/issues/62051
| true
| null | null |
jbrockmendel
| 1
|
```python
arr = pd.array(
["2017-01-01", "2018-01-01", "2019-01-01"],
dtype="date32[day][pyarrow]"
)
cats = pd.Index(['2017-01-01', '2018-01-01', '2019-01-01'], dtype="M8[s]")
dtype = pd.CategoricalDtype(cats, ordered=False)
arr.astype(cats.dtype) # <- works
arr.astype(dtype) # <- all-NaT
arr = pd.core.arrays.ArrowExtensionArray._from_sequence(["1h", "2h", "3h"])
cats = pd.Index(['1h', '2h', '3h'], dtype="m8[ns]")
dtype = pd.CategoricalDtype(cats, ordered=False)
arr.astype(cats.dtype) # <- works
arr.astype(dtype) # <- all-NaT
```
|
[
"Bug",
"Dtype Conversions",
"Categorical",
"datetime.date",
"Arrow"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take\n"
] |
3,293,913,873
| 62,050
|
ENH: String functions for df.aggregate()
|
open
| 2025-08-05T17:36:41
| 2025-08-24T12:28:07
| null |
https://github.com/pandas-dev/pandas/issues/62050
| true
| null | null |
JustusKnnck
| 2
|
### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
I wish I could use string functions like "first" and "last" when aggregating a dataframe just like they are used when aggregating a gorupby-object.
### Feature Description
The goal is to allow "first" and "last" as valid aggregation strings in DataFrame.agg() and Series.agg() without requiring a groupby.
Implementation idea:
Currently, Series.agg() checks if the passed function name is a valid aggregation from NumPy or Pandas’ reduction methods. We can extend this logic to explicitly map "first" and "last" to the first and last elements of the Series.
Pseudocode:
# Inside Series.agg() (simplified)
if isinstance(func, str):
if func == "first":
return self.iloc[0]
if func == "last":
return self.iloc[-1]
# existing code follows...
Expected behavior after change:
df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c":[7,8,9]})
aggregations = {"a": "sum", "b": "first", "c": "last"}
df.agg(aggregations)
# Returns:
a 6
b 4
c 9
This would align the behavior with groupby().agg(), which already supports "first" and "last".
### Alternative Solutions
aggregations = {col: ("sum" if col in sumcols else (lambda x: x.iloc[-1])) for col in df.columns}
df.agg(aggregations)
### Additional Context
_No response_
|
[
"Enhancement",
"Needs Discussion",
"Reduction Operations"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the request. Prior to 3.0, pandas already has `Series.first` but it only works with time series and has a required `offset` argument. This was deprecated and will be removed in 3.0, so we would be able to add `Series.first` to get the first element of the Series (similarly with last). I'm positive on this - I think it can be convenient in method chaining and makes for a more consistent API in addition to the use case with `agg`.\n\nWe need to decide the behavior on an empty Series. The three obvious options to me are (a) raise, (b) NA-value for the dtype, or (c) `None`. I would lean toward (b) here.\n\nI also think we shouldn't add such function until at least pandas 3.1, and really even later than that.",
"I fully support this enhancement! Allowing `\"first\"` and `\"last\"` as valid aggregation strings in `DataFrame.agg()` and `Series.agg()` would make the API more consistent with `groupby().agg()` and simplify many common workflows. \n\nA few points to note: \n\n- **Consistency:** `groupby().agg()` already supports `\"first\"` and `\"last\"`, so this would make aggregation behavior uniform across DataFrame and Series. \n- **Empty Series:** Returning an `NA` value for the dtype seems safest and keeps method chaining smooth, instead of raising errors or returning `None`. \n- **Implementation:** Explicitly mapping `\"first\"` to `self.iloc[0]` and `\"last\"` to `self.iloc[-1]` inside `Series.agg()` is straightforward and avoids the need for lambda functions like `lambda x: x.iloc[0]` or `lambda x: x.iloc[-1]`. \n- **Versioning:** Waiting until pandas 3.1+ is prudent, given the deprecation of the old `Series.first` method with the offset argument. \n\nOverall, this change would improve usability, consistency, and readability in DataFrame and Series aggregation workflows.\n"
] |
3,293,700,389
| 62,049
|
REF: simplify mask_missing
|
closed
| 2025-08-05T16:28:39
| 2025-08-05T17:21:34
| 2025-08-05T17:19:35
|
https://github.com/pandas-dev/pandas/pull/62049
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62049
|
https://github.com/pandas-dev/pandas/pull/62049
|
jbrockmendel
| 1
|
In the past this had to handle list-like values_to_mask but that is no longer the case, so this can be simplified a bit. The edit in dtypes.common makes `dtype.kind in ...` checks very slightly faster.
|
[
"Refactor"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jbrockmendel "
] |
3,293,566,587
| 62,048
|
API: Series[Float64] == False
|
closed
| 2025-08-05T15:35:11
| 2025-08-06T02:34:49
| 2025-08-06T02:34:49
|
https://github.com/pandas-dev/pandas/issues/62048
| true
| null | null |
jbrockmendel
| 2
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
ser = pd.Series([0], dtype="Float64")
>>> ser == False
0 True
dtype: boolean
```
### Issue Description
NA
### Expected Behavior
I would expect this to be stricter in type-safety. The lack of strictness necessitates special-casing in mask_missing (called from Block.replace).
Note that these also compare as equal for numpy float64 and float64[pyarrow]
### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
|
[
"Bug",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This impacts all dtypes (bool, int, float) with all storage types (NumPy, PyArrow) I believe. NumPy and Polars also behave the same way - giving `True` as the result. While I don't necessarily disagree with the idea of not allowing Boolean and float to compare as equal, it seems like we might be creating a new special behavior by doing so.",
"Fair point, not worth it for the foreseeable future. Closing."
] |
3,293,532,273
| 62,047
|
BUG: failing when groupby on data containing bytes
|
closed
| 2025-08-05T15:23:14
| 2025-08-06T01:52:22
| 2025-08-06T01:51:55
|
https://github.com/pandas-dev/pandas/issues/62047
| true
| null | null |
cderemble
| 1
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import numpy as np
import pandas as pd
pd.Series(np.array([b""])).groupby(level=0).last()
```
### Issue Description
when calling `groupby` on a frame or series containing bytes, an exception is raised:
`AttributeError: 'numpy.dtypes.BytesDType' object has no attribute 'construct_array_type'`
### Expected Behavior
Normal groupby behaviour
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.13.5
python-bits : 64
OS : Linux
OS-release : 4.18.0-425.3.1.el8.x86_64
Version : #1 SMP Fri Sep 30 11:45:06 EDT 2022
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.3.1
numpy : 2.3.2
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Duplicate Report",
"Error Reporting",
"Constructors"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the report. pandas does not support NumPy byte arrays, only bytes objects stored in an object dtype array. This should error on Series construction, which is #60108. Closing as a duplicate."
] |
3,291,394,782
| 62,046
|
BUG Updated border attribute to in-line CSS
|
open
| 2025-08-05T03:13:19
| 2025-08-08T16:02:17
| null |
https://github.com/pandas-dev/pandas/pull/62046
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62046
|
https://github.com/pandas-dev/pandas/pull/62046
|
bennychenOSU
| 0
|
- [ x] closes #61949
- [ x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,291,011,670
| 62,045
|
DOC: updated BooleanDType docstring
|
open
| 2025-08-04T22:56:08
| 2025-08-06T17:44:32
| null |
https://github.com/pandas-dev/pandas/pull/62045
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62045
|
https://github.com/pandas-dev/pandas/pull/62045
|
saguaro1234
| 0
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ check] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,290,923,808
| 62,044
|
DOC: BooleanDType docstring update
|
closed
| 2025-08-04T22:11:05
| 2025-08-04T22:15:16
| 2025-08-04T22:15:16
|
https://github.com/pandas-dev/pandas/pull/62044
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62044
|
https://github.com/pandas-dev/pandas/pull/62044
|
saguaro1234
| 0
|
- [61939 ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [check ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,290,359,243
| 62,043
|
API: rank with nullable dtypes preserve NA
|
closed
| 2025-08-04T18:09:09
| 2025-08-04T20:53:06
| 2025-08-04T20:43:51
|
https://github.com/pandas-dev/pandas/pull/62043
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62043
|
https://github.com/pandas-dev/pandas/pull/62043
|
jbrockmendel
| 1
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"NA - MaskedArrays"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jbrockmendel "
] |
3,290,320,726
| 62,042
|
REF: Avoid/defer `dtype=object` containers in plotting
|
closed
| 2025-08-04T17:53:54
| 2025-08-05T16:16:39
| 2025-08-05T14:52:08
|
https://github.com/pandas-dev/pandas/pull/62042
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62042
|
https://github.com/pandas-dev/pandas/pull/62042
|
mroeschke
| 3
|
Probably best to avoid operations on containers with these types unless needed/expected
|
[
"Visualization"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"pandas/plotting/_matplotlib/boxplot.py:246: error: Argument \"labels\" to \"_set_ticklabels\" has incompatible type \"list[Hashable]\"; expected \"list[str]\" [arg-type]\r\n",
"Seems benign",
"thanks @mroeschke "
] |
3,290,085,052
| 62,041
|
[pre-commit.ci] pre-commit autoupdate
|
closed
| 2025-08-04T16:30:39
| 2025-08-04T20:37:25
| 2025-08-04T20:37:22
|
https://github.com/pandas-dev/pandas/pull/62041
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62041
|
https://github.com/pandas-dev/pandas/pull/62041
|
pre-commit-ci[bot]
| 0
|
<!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.12.2 → v0.12.7](https://github.com/astral-sh/ruff-pre-commit/compare/v0.12.2...v0.12.7)
- [github.com/pre-commit/mirrors-clang-format: v20.1.7 → v20.1.8](https://github.com/pre-commit/mirrors-clang-format/compare/v20.1.7...v20.1.8)
- [github.com/trim21/pre-commit-mirror-meson: v1.8.2 → v1.8.3](https://github.com/trim21/pre-commit-mirror-meson/compare/v1.8.2...v1.8.3)
<!--pre-commit.ci end-->
|
[
"Code Style"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,289,945,128
| 62,040
|
API: mode.nan_is_na to consistently distinguish NaN-vs-NA
|
open
| 2025-08-04T15:42:47
| 2025-08-23T15:32:11
| null |
https://github.com/pandas-dev/pandas/pull/62040
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62040
|
https://github.com/pandas-dev/pandas/pull/62040
|
jbrockmendel
| 1
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
As discussed on the last dev call, this implements `"mode.nan_is_na"` (default `True`) to consider NaN as either always-equivalent or never-equivalent to NA.
This sits on top of
- #62021, which trims the diff here by updating some tests to use NA instead of NaN.
- #61732 which implements the option but only for pyarrow dtypes.
- #62038 which addresses an issue in `DataFrame.where`
- #62053 which addresses a kludge in read_csv with engine="pyarrow"
Still need to
- [x] Add docs for the new option, including whatsnew section
- [x] deal with a kludge in algorithms.rank; fixed by #62043
- [x] deal with a kludge in read_csv with engine="pyarrow"; fixed by #62053
- [ ] Add tests for the issues this addresses
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Discussed in the dev call before last where I, @mroeschke, and @Dr-Irv were +1. Joris was unenthused but \"not necessarily opposed\". On slack @rhshadrach expressed a +1. All those opinions were to the concept, not the execution."
] |
3,289,017,015
| 62,039
|
Bump pypa/cibuildwheel from 3.1.1 to 3.1.3
|
closed
| 2025-08-04T11:15:30
| 2025-08-04T16:32:56
| 2025-08-04T16:32:52
|
https://github.com/pandas-dev/pandas/pull/62039
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62039
|
https://github.com/pandas-dev/pandas/pull/62039
|
dependabot[bot]
| 0
|
Bumps [pypa/cibuildwheel](https://github.com/pypa/cibuildwheel) from 3.1.1 to 3.1.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/releases">pypa/cibuildwheel's releases</a>.</em></p>
<blockquote>
<h2>v3.1.3</h2>
<ul>
<li>🐛 Fix bug where "latest" dependencies couldn't update to pip 25.2 on Windows (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2537">#2537</a>)</li>
<li>🛠 Use pytest-rerunfailures to improve some of our iOS/Android tests (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2527">#2527</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2539">#2539</a>)</li>
<li>🛠 Remove some GraalPy Windows workarounds in our tests (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2501">#2501</a>)</li>
</ul>
<h2>v3.1.2</h2>
<ul>
<li>⚠️ Add an error if <code>CIBW_FREE_THREADING_SUPPORT</code> is set; you are likely missing 3.13t wheels, please use the <code>enable</code>/<code>CIBW_ENABLE</code> (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2520">#2520</a>)</li>
<li>🛠 <code>riscv64</code> now enabled if you target that architecture, it's now supported on PyPI (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2509">#2509</a>)</li>
<li>🛠 Add warning when using <code>cpython-experimental-riscv64</code> (no longer needed) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2526">#2526</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2528">#2528</a>)</li>
<li>🛠 iOS versions bumped, fixing issues with 3.14 (now RC 1) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2530">#2530</a>)</li>
<li>🐛 Fix bug in Android running wheel from our GitHub Action (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2517">#2517</a>)</li>
<li>🐛 Fix warning when using <code>test-skip</code> of <code>"*-macosx_universal2:arm64"</code> (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2522">#2522</a>)</li>
<li>🐛 Fix incorrect number of wheels reported in logs, again (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2517">#2517</a>)</li>
<li>📚 We welcome our Android platform maintainer (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2516">#2516</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/blob/main/docs/changelog.md">pypa/cibuildwheel's changelog</a>.</em></p>
<blockquote>
<h3>v3.1.3</h3>
<p><em>1 August 2025</em></p>
<ul>
<li>🐛 Fix bug where "latest" dependencies couldn't update to pip 25.2 on Windows (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2537">#2537</a>)</li>
<li>🛠 Use pytest-rerunfailures to improve some of our iOS/Android tests (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2527">#2527</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2539">#2539</a>)</li>
<li>🛠 Remove some GraalPy Windows workarounds in our tests (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2501">#2501</a>)</li>
</ul>
<h3>v3.1.2</h3>
<p><em>29 July 2025</em></p>
<ul>
<li>⚠️ Add an error if <code>CIBW_FREE_THREADING_SUPPORT</code> is set; you are likely missing 3.13t wheels, please use the <code>enable</code>/<code>CIBW_ENABLE</code> (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2520">#2520</a>)</li>
<li>🛠 <code>riscv64</code> now enabled if you target that architecture, it's now supported on PyPI (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2509">#2509</a>)</li>
<li>🛠 Add warning when using <code>cpython-experimental-riscv64</code> (no longer needed) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2526">#2526</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2528">#2528</a>)</li>
<li>🛠 iOS versions bumped, fixing issues with 3.14 (now RC 1) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2530">#2530</a>)</li>
<li>🐛 Fix bug in Android running wheel from our GitHub Action (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2517">#2517</a>)</li>
<li>🐛 Fix warning when using <code>test-skip</code> of <code>"*-macosx_universal2:arm64"</code> (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2522">#2522</a>)</li>
<li>🐛 Fix incorrect number of wheels reported in logs, again (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2517">#2517</a>)</li>
<li>📚 We welcome our Android platform maintainer (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2516">#2516</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pypa/cibuildwheel/commit/352e01339f0a173aa2a3eb57f01492e341e83865"><code>352e013</code></a> Bump version: v3.1.3</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/c463e56ba22f7f7e6c8871b006a06384c08cff34"><code>c463e56</code></a> tests: another iOS flaky spot (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2539">#2539</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/8c5c738023fee8aad6412105b42ea798066b1438"><code>8c5c738</code></a> docs(project): add Falcon to working examples (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2538">#2538</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/feeb3992a7ea36ffbc9d4446debea40f9aa24861"><code>feeb399</code></a> tests: add flaky test handling (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2527">#2527</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/60b9cc95db51f9f5e48562fcb1b3f7ac3f9cb4a1"><code>60b9cc9</code></a> fix: never call pip directly (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2537">#2537</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/e2c7102ed7981cd79d28a5eb0a196f8242b1adab"><code>e2c7102</code></a> chore: remove some GraalPy Windows workarounds. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2501">#2501</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/9e4e50bd76b3190f55304387e333f6234823ea9b"><code>9e4e50b</code></a> Bump version: v3.1.2</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/8ef9414f60b366420233447f0abd96586ed394c7"><code>8ef9414</code></a> [pre-commit.ci] pre-commit autoupdate (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2532">#2532</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/1953c0497215dcf2711e1fbfd3ae8952e8ad604c"><code>1953c04</code></a> Adding <a href="https://github.com/mhsmith"><code>@mhsmith</code></a> as platform maintainer for Android (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2516">#2516</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/46a6d279953e2947496fa28a22ded264f4027a5f"><code>46a6d27</code></a> Bump iOS support package versions. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2530">#2530</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pypa/cibuildwheel/compare/v3.1.1...v3.1.3">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details>
|
[
"CI",
"Dependencies"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,287,540,739
| 62,038
|
API: improve dtype in df.where with EA other
|
closed
| 2025-08-03T21:08:57
| 2025-08-05T02:03:38
| 2025-08-05T01:40:28
|
https://github.com/pandas-dev/pandas/pull/62038
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62038
|
https://github.com/pandas-dev/pandas/pull/62038
|
jbrockmendel
| 9
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Improves the patch-job done by #38742. Also makes the affected test robust to always-distinguish NAN-vs-NA behavior.
|
[
"Dtype Conversions",
"Error Reporting"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Looks like [`87d5fdf`](https://github.com/pandas-dev/pandas/pull/62038/commits/87d5fdfd1983c6408033f22009bab7b5b0d1be07) undid all your changes before",
"Woops. Looks better now.",
"```\r\nruff format.............................................................................................Failed\r\n- hook id: ruff-format\r\n- files were modified by this hook\r\n```\r\n\r\nAny idea how to make it tell me what it wants to change?",
"You can comment `pre-commit.ci autofix` in this PR to get pre-commit to add a commit to fix it for you.\r\n\r\nOtherwise if you have the pre-commit hooks installed, to show the fixes you'll probably need to add `--show-fixes` on this line \r\n\r\nhttps://github.com/pandas-dev/pandas/blob/84757581420cfcc79448aa3274e28d90aaf75c87/.pre-commit-config.yaml#L25",
"running `ruff format` locally on the affected files says it leaves them unchanged",
"What about running `pre-commit run ruff --all-files`? (Generally `pre-commit run <id to check>` is the source of truth for the linting checks)",
"That did it, thanks. Lets see if the CI agrees",
"Booyah, did it. Thanks for help troubleshooting",
"Np! Thanks @jbrockmendel "
] |
3,287,397,246
| 62,037
|
"BUG: Fix repeated rolling mean assignment causing all-NaN values"
|
closed
| 2025-08-03T17:44:44
| 2025-08-03T18:32:10
| 2025-08-03T18:32:09
|
https://github.com/pandas-dev/pandas/pull/62037
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62037
|
https://github.com/pandas-dev/pandas/pull/62037
|
abujabarmubarak
| 3
|
## Fix repeated rolling mean assignment causing all-NaN values
- Closes #<issue_number> (if there’s an issue, otherwise leave this out)
- This PR fixes a regression where assigning the result of `.rolling().mean()` to a DataFrame column more than once caused all values in the column to become NaN (see pandas-dev/pandas#61841).
- The bug was due to pandas reusing memory blocks when overwriting an existing column with a rolling result Series, leading to incorrect block alignment.
- The fix is to make a defensive `.copy()` of the Series when overwriting an existing column, ensuring correct assignment.
### Example
```python
df = pd.DataFrame({"A": range(30)})
df["SMA"] = df["A"].rolling(20).mean()
df["SMA"] = df["A"].rolling(20).mean()
print(df["SMA"].notna().sum()) # should be > 0, not all NaN
```
### Tests
- Added a regression test in `pandas/tests/window/test_rolling.py`.
- All tests pass locally.
---
Thanks for your consideration!
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Is this AI? The claims about tests in the OP are obviously false.",
"@mroeschke can we block a person? Looking at their PR history it has “AI spam” written all over it",
"Agreed, blocking and closing this PR"
] |
3,287,371,818
| 62,036
|
BUG: rank with object dtype and small values
|
open
| 2025-08-03T17:18:34
| 2025-08-15T15:16:00
| null |
https://github.com/pandas-dev/pandas/issues/62036
| true
| null | null |
jbrockmendel
| 1
|
```python
# Based on test_rank_ea_small_values
import pandas as pd
ser = pd.Series(
[5.4954145e29, -9.791984e-21, 9.3715776e-26, pd.NA, 1.8790257e-28],
dtype="Float64",
)
ser2 = ser.astype(object)
>>> ser.rank(method="min")
0 4.0
1 1.0
2 3.0
3 NaN
4 2.0
dtype: float64
>>> ser2.rank(method="min")
0 4.0
1 1.0
2 1.0
3 NaN
4 1.0
dtype: float64
```
I'd expect 1) the values to match and 2) to get NA rather than NaN at least for the Float64 case.
Update: if we convert to float64[pyarrow] first we do get NA back and a uint64[pyarrow] dtype.
|
[
"Bug",
"Missing-data",
"NA - MaskedArrays",
"Transformations"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Take"
] |
3,287,304,965
| 62,035
|
BUG: raise a proper exception when str.rsplit is passed a regex and clarify the docs
|
open
| 2025-08-03T16:07:18
| 2025-08-04T18:09:04
| null |
https://github.com/pandas-dev/pandas/pull/62035
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62035
|
https://github.com/pandas-dev/pandas/pull/62035
|
hamdanal
| 0
|
- [x] closes #29633
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Noticed this while working on https://github.com/pandas-dev/pandas-stubs/pull/1278. `rsplit` doesn't accept regular expressions but was silently accepting them and producing bad results. I added a check on the type of the input and updated the documentation.
|
[
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,287,188,882
| 62,034
|
Ignore this
|
closed
| 2025-08-03T13:42:03
| 2025-08-03T14:39:35
| 2025-08-03T14:39:35
|
https://github.com/pandas-dev/pandas/pull/62034
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62034
|
https://github.com/pandas-dev/pandas/pull/62034
|
0x3vAD
| 0
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,287,166,970
| 62,033
|
BUG: Wrong inferred type in case of a mixture of boolean, float and integers
|
open
| 2025-08-03T13:15:29
| 2025-08-05T11:03:33
| null |
https://github.com/pandas-dev/pandas/issues/62033
| true
| null | null |
tqa236
| 3
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
>>> import pandas as pd
>>> idx = pd.Index(pd.array([1., True, 2., 3., 4]))
>>> idx.inferred_type # Wrong, should be mixed
'mixed-integer'
>>> idx = pd.Index(pd.array([1., True, 2., 3., 4.]))
>>> idx.inferred_type # Correct
'mixed'
>>> idx = pd.Index(pd.array([1, True, 2, 3, 4]))
>>> idx.inferred_type # Correct
'mixed-integer'
```
### Issue Description
While exploring https://github.com/pandas-dev/pandas/issues/61709, I noticed this strange behavior: In case of a mixture of boolean, float and integers, the inferred type is "mixed-integer" and not "mixed"
### Expected Behavior
"mixed" inferred type when there are floats, integers and booleans.
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 2cc9b21c9ad9b3df0f084b6d2e8462b1b78d4e8a
python : 3.10.16
python-bits : 64
OS : Linux
OS-release : 6.6.87.2-microsoft-standard-WSL2
Version : #1 SMP PREEMPT_DYNAMIC Thu Jun 5 18:30:46 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+1904.g2cc9b21c9a
numpy : 1.26.4
dateutil : 2.9.0.post0
pip : 25.0
Cython : 3.0.11
sphinx : 8.1.3
IPython : 8.32.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
bottleneck : 1.4.2
fastparquet : 2024.11.0
fsspec : 2025.2.0
html5lib : 1.1
hypothesis : 6.125.1
gcsfs : 2025.2.0
jinja2 : 3.1.5
lxml.etree : 5.3.0
matplotlib : 3.10.0
numba : 0.61.0
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
psycopg2 : 2.9.9
pymysql : 1.4.6
pyarrow : 19.0.0
pyiceberg : None
pyreadstat : 1.2.8
pytest : 8.3.4
python-calamine : None
pytz : 2025.1
pyxlsb : 1.0.10
s3fs : 2025.2.0
scipy : 1.15.1
sqlalchemy : 2.0.37
tables : 3.10.1
tabulate : 0.9.0
xarray : 2024.9.0
xlrd : 2.0.1
xlsxwriter : 3.2.2
zstandard : 0.23.0
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Dtype Conversions"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Confirmed on main. PRs are welcome!\n\nThanks for raising!",
"Look like the definition is not super clear here and changing it can touch a lot of places. \n\nThis is the current definition. I guess the implementation follows it.\n\n```\n - 'mixed' is the catchall for anything that is not otherwise\n specialized\n - 'mixed-integer-float' are floats and integers\n - 'mixed-integer' are integers mixed with non-integers\n```\n\nI guess it should be something like this:\n\n```\n - 'mixed' is the catchall for anything that is not otherwise\n specialized\n - 'mixed-integer-float' are floats and integers and other non-integers\n - 'mixed-integer' are integers mixed with non-integers but no floats\n```\n\nOr if we want to keep 'mixed-integer-float' to only integers and floats, we can potentially introduce a 'mixed-float': floats mixed with non-floats but no integers",
"take"
] |
3,286,989,398
| 62,032
|
EHN: return early when the result is None
|
closed
| 2025-08-03T09:13:48
| 2025-08-04T16:48:27
| 2025-08-04T16:48:22
|
https://github.com/pandas-dev/pandas/pull/62032
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62032
|
https://github.com/pandas-dev/pandas/pull/62032
|
zhiqiangxu
| 3
|
There's no need to continue the loop when the result is destined to be None.
|
[
"Index"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"I prefer single-return, but not a huge deal",
"> I prefer single-return, but not a huge deal\r\n\r\nSwitched to using `break` instead.",
"Thanks @zhiqiangxu "
] |
3,286,799,825
| 62,031
|
API: timestamp resolution inference: default to microseconds when possible
|
open
| 2025-08-03T07:36:52
| 2025-08-15T23:06:40
| null |
https://github.com/pandas-dev/pandas/pull/62031
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62031
|
https://github.com/pandas-dev/pandas/pull/62031
|
jorisvandenbossche
| 3
|
Draft PR for https://github.com/pandas-dev/pandas/issues/58989/
This should already make sure that we consistently use 'us' when converting non-numeric data in `pd.to_datetime` and `pd.Timestamp`, but if we want to do this, this PR still requires updating lots of tests and docs (and whatsnew) and cleaning up.
Currently the changes here will ensure that we use microseconds more consistently when inferring the resolution while creating datetime64 data. Exceptions: if the data don't fit in the range of us (either because out of bounds (use ms or s) or because it has nanoseconds or below (use ns)), or if the input data already has a resolution defined (for Timestamp objects, or numpy datetime64 data).
|
[
"Datetime",
"Non-Nano",
"Timestamp"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@jbrockmendel would you have time to give this a review?",
"Yes, but its in line behind a few other reviews i owe.",
"Small comments, no complaints about the approach. Haven't looked at the tests yet since i don't expect any surprises; will do so once green."
] |
3,286,617,297
| 62,030
|
BUG: Catch TypeError in _is_dtype_type when converting abstract numpy types (#62018)
|
open
| 2025-08-03T03:05:01
| 2025-08-03T16:43:53
| null |
https://github.com/pandas-dev/pandas/pull/62030
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62030
|
https://github.com/pandas-dev/pandas/pull/62030
|
abhaypsingh
| 1
|
- Closes #62018
- Wrap np.dtype() call in try/except to handle abstract numpy types (e.g. np.floating, np.inexact).
- On TypeError, return condition(type(None)) to indicate mismatch rather than raising.
This prevents `is_signed_integer_dtype` and similar functions from raising on abstract NumPy classes and restores expected behaviour.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This is a small perf hit that will add up in a ton of places, all for something that we shouldn't expect to work anyway."
] |
3,286,518,961
| 62,029
|
DOC: fix mask/where docstring alignment note (#61781)
|
open
| 2025-08-02T23:44:06
| 2025-08-03T16:49:01
| null |
https://github.com/pandas-dev/pandas/pull/62029
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62029
|
https://github.com/pandas-dev/pandas/pull/62029
|
abhaypsingh
| 1
|
The explanatory paragraph wrongly said that alignment is between `other` and `cond`. It is between *self* and `cond`; values fall back to *self* for mis-aligned positions. Update both generic docstring templates so all Series/DataFrame variants inherit the correct wording.
Closes #61781
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"How do I merge this? "
] |
3,286,290,983
| 62,028
|
TST: Speed up hypothesis and slow tests
|
closed
| 2025-08-02T20:49:14
| 2025-08-03T18:28:15
| 2025-08-03T16:45:53
|
https://github.com/pandas-dev/pandas/pull/62028
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62028
|
https://github.com/pandas-dev/pandas/pull/62028
|
mroeschke
| 1
|
* Hypothesis tests seems to be the slowest running tests in CI. Limiting the `max_examples` IMO is OK as we're looking to exercise some edge cases
* Avoiding some work being done in `test_*number_of_levels_larger_than_int32` as we're just looking to check a warning
|
[
"Testing"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"thanks @mroeschke "
] |
3,286,157,068
| 62,027
|
BUG: Fix DataFrame reduction to preserve NaN vs <NA> in mixed dtypes (GH#62024)
|
closed
| 2025-08-02T17:49:23
| 2025-08-02T22:34:45
| 2025-08-02T22:34:45
|
https://github.com/pandas-dev/pandas/pull/62027
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62027
|
https://github.com/pandas-dev/pandas/pull/62027
|
Aniketsy
| 2
|
(GH#62024)
This PR fixes a bug in (DataFrame._reduce) where reductions on (DataFrames) with mixed dtypes (e.g., float64 and nullable integer Int64) would incorrectly upcast all results to use pd.NA and the Float64 dtype if any column was a pandas extension type.
Please let me know if my approach or fix needs any improvements . I’m open to feedback and happy to make changes based on suggestions.
Thankyou!
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This is not the correct approach. Please look for issues with the Good First Issue label.",
"Thankyou for feedback . I will look for good first issues"
] |
3,286,155,407
| 62,026
|
BUG: groupby.idxmin/idxmax with all NA values should raise
|
closed
| 2025-08-02T17:47:07
| 2025-08-05T18:55:50
| 2025-08-05T18:55:42
|
https://github.com/pandas-dev/pandas/pull/62026
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62026
|
https://github.com/pandas-dev/pandas/pull/62026
|
rhshadrach
| 2
|
- [x] closes #57745
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Groupby",
"API - Consistency",
"Reduction Operations"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Couple of small questions, assuming no surprises on those: LGTM",
"Thanks @rhshadrach "
] |
3,286,153,785
| 62,025
|
BUG: Change default of observed in Series.groupby
|
closed
| 2025-08-02T17:44:40
| 2025-08-02T20:52:31
| 2025-08-02T20:52:25
|
https://github.com/pandas-dev/pandas/pull/62025
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62025
|
https://github.com/pandas-dev/pandas/pull/62025
|
rhshadrach
| 1
|
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Fix for #57330. In all tests where `observed` makes a difference, we explicitly specify `observed` so this wasn't noticed. The deprecation itself was properly done (saying that we were changing the default to True), it was only the enforcement of the deprecation that had a mistake.
|
[
"Bug",
"Groupby",
"Categorical"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @rhshadrach "
] |
3,286,022,429
| 62,024
|
API: NaN vs NA in mixed reduction
|
open
| 2025-08-02T15:38:02
| 2025-08-16T18:58:27
| null |
https://github.com/pandas-dev/pandas/issues/62024
| true
| null | null |
jbrockmendel
| 14
|
```python
df = pd.DataFrame(
{
"B": [1, None, 3],
"C": pd.array([1, None, 3], dtype="Int64"),
}
)
result = df.skew()
>>> result
B <NA>
C <NA>
dtype: Float64
>>> df[["B"]].skew()
B NaN
dtype: float64
```
Based on test_mixed_reductions. The presence of column "C" shouldn't affect the result we get for column "B".
|
[
"Bug",
"Reduction Operations",
"PDEP missing values"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"> ### Expected Behavior\n> NA\n\nI would expect `Float64` dtype with `NaN` and `NA`. One might also argue object, but so far it appears we coerce to the nullable dtypes.\n\n```python\ndf = pd.DataFrame(\n {\n \"B\": [1, 2, 3],\n \"C\": pd.array([1, 2, 3], dtype=\"Float64\"),\n }\n)\nprint(df.sum())\n# B 6.0\n# C 6.0\n# dtype: Float64\n```",
"Hah, in the expected behavior section i put \"NA\" to mean \"I'm not writing anything here\", not pd.NA.",
"I would say that the current behaviour is fine. We indeed currently coerce to the nullable dtype for the result if there are mixed nullable and non-nullable columns, and at that point converting NaN to NA seems the correct thing to do (if you would first cast the original non-nullable column to its nullable dtype, you would get the same result)",
"> I would say that the current behaviour is fine\n\nIt's fine in a never-distinguish world. We're currently in a sometimes-distinguish world in which I had previously thought the rule was \"distinguish when NaNs are introduced through operations\". We can make that rule more complicated \"distinguish when NaNs are introduced through operations but not reductions\", but I do think that makes it a worse rule.\n\n> if you would first cast the original non-nullable column to its nullable dtype\n\nIn this case the user specifically didn't do that.",
"> We indeed currently coerce to the nullable dtype for the result if there are mixed nullable and non-nullable columns\n\nShould coercion to Float64 change NaN to NA?\n\n```python\nser = pd.Series([np.nan, pd.NA], dtype=\"object\")\nprint(ser.astype(\"Float64\"))\n# 0 <NA>\n# 1 <NA>\n# dtype: Float64\n```",
"> Should coercion to Float64 change NaN to NA?\n\nThat is expected behavior ATM, yes. In an always-distinguish world it would not (xref #62040)",
"take",
"@sharkipelago this is not the best issue to try to tackle at this point, as there is not yet a clear action to be taken",
"> > if you would first cast the original non-nullable column to its nullable dtype\n> \n> In this case the user specifically didn't do that.\n\nThe user indeed did not _specifically_ cast themselves, but if you do a reduction operation on a DataFrame with multiple columns (and multiple dtypes), you are always doing an implicit cast. Also if you have an integer and float column, the integer result gets cast to float.\n\nIf you consider a reduction on a DataFrame as reducing each column separately, ad then a concat of the results, then again the current behaviour is kind of the expected behaviour, because concatting float64 and Float64 gives Float64, converting NaNs to NA:\n\n```python\n>>> pd.concat([pd.Series([np.nan], dtype=\"float64\"), pd.Series([pd.NA], dtype=\"Float64\")])\n0 <NA>\n0 <NA>\ndtype: Float64\n```\n\nHere the float64 series gets cast to the common dtype, i.e. Float64, and casting float64 to nullable float64 converts NaNs to NA. You can of course question this casting behaviour, but as you mention in the comment above, this is the expected behaviour at the moment.",
"> [@sharkipelago](https://github.com/sharkipelago) this is not the best issue to try to tackle at this point, as there is not yet a clear action to be taken\n\nUnderstood. Thanks for letting me know. I had mistakenly thought the behaviour should be updated so that (for the skew example above) column \"B\" always becomes NaN regardless of if column \"C\" is present in the DataFrame. Is there a way to un-assign myself?",
"Agree with OP that users will find it surprising that the presence of another column will change the result. But unless we're going to go to object dtype this coercion needs to be a part of general reductions since we go from horizontal to vertical. So I think the \"right\" user expectation is for reducers to compute and then coerce as necessary. Assuming this, current behavior is correct.\n\nI do find coercion converting NaN to NA surprising but that's for a separate issue.",
"Since the current behavior is \"correct\", i'm updating the title from BUG to API. I maintain that if I was surprised, users will be surprised.",
"The solution to all this surprising behaviour is of course to eventually only have dtypes that use NA for missing data, so we don't run in those \"mixed\" dataframes with some NA-based columns and some NaN-based columns (or I think as someone suggested in an earlier related discussion, prohibit such mixed dataframes for now until we get there, but personally I don't think that is going to be practical)",
"> The solution to all this surprising behaviour is of course to eventually only have dtypes that use NA\n\nThat'll be great eventually. I would also consider the issue solved if we got to either \"always distinguish\" or \"never distinguish\" so there wouldn't be any confusion as to when we distinguish."
] |
3,285,544,351
| 62,023
|
continue from #61957 which closed with unmerged commit
|
open
| 2025-08-02T03:19:52
| 2025-08-16T00:35:13
| null |
https://github.com/pandas-dev/pandas/pull/62023
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62023
|
https://github.com/pandas-dev/pandas/pull/62023
|
maddiew95
| 2
|
Using Markup() due to ascii reading on attribute tags. Can check in #61957
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"for #51536 ",
"hi @mroeschke could you advice repacement for Markup safe ?"
] |
3,285,518,070
| 62,022
|
DEPR: convert_dtypes keywords
|
open
| 2025-08-02T02:37:27
| 2025-08-07T02:38:42
| null |
https://github.com/pandas-dev/pandas/issues/62022
| true
| null | null |
jbrockmendel
| 4
|
Looking at the keywords for convert_dtypes I'm wondering if users actually want it for anything other than dtype_backend?
|
[
"Dtype Conversions",
"Deprecate",
"Needs Discussion"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"I would like to work on this issue.\nCould you please provide more detailed instructions or clarify the specific changes you’re looking for?\nThankyou !",
"It seems to me there are two situations that a user may face:\n\n1. \"I want to find the best dtype to hold my data\". For this, I think the keywords e.g. `convert_integer` makes sense.\n2. \"I want to to take my data and convert all the dtypes to the corresponding pyarrow dtype \". For this I would not have the keywords.\n\nPerhaps these should be separate functions?\n\nI personally think (1) is of questionable use: it's value-specific behavior, and users would get better behavior by converting the data themselves. Still, in ad-hoc analysis type situations I can see the convenience and am okay with keeping it. (2) on the other hand seems highly desirable.",
"FWIW, this method was originally added to convert to _nullable_ dtypes, not specifically to arrow-backed dtypes (that is also still the default behaviour).\n\nAnd I think one of the reasons that we added those keywords initially is that people might eg only wanted to use the nullable integer dtype (because that adds more value) and not necessarily the nullable float or string. At the moment that those dtypes were introduced (experimentally), I think those keywords made sense. But that is less the case right now (and indeed even less so if you specify `dtypes_backend`)\n\n> 1. \"I want to find the best dtype to hold my data\". For this, I think the keywords e.g. convert_integer makes sense.\n\nI don't think that this function actually does that? \nExcept for going from object dtype to a better dtype. For that we also already have a dedicated method `df.infer_objects()`, but `convert_dtypes()` was added on top of that to convert to nullable.\n\nBut for the rest, the function only converts from non-nullable to nullable, it won't actaully \"optimize\" your data types (for example, it won't try to downcast to a smaller bitsize if possible, like one can do in `pd.to_numeric`). \n(there is of course the specific case of casting rounded floats to integer, but that is tied to the aspect of converting to nullable dtypes)",
"Thanks @jorisvandenbossche - makes sense. I think I got these mixed up, especially with the `convert_dtypes` docstring being:\n\n> Convert columns to the best possible dtypes using dtypes supporting `pd.NA`.\n\nI'm definitely good with deprecation of the `convert_*` arguments here. However the `infer_objects=True` option seems like it could be better suited for just using `DataFrame.infer_objects` if we were to add the possibility of converting to NumPy-nullable / PyArrow dtypes directly there."
] |
3,285,422,535
| 62,021
|
TST: nan->NA in non-construction tests
|
closed
| 2025-08-02T00:45:44
| 2025-08-04T16:53:23
| 2025-08-04T16:52:42
|
https://github.com/pandas-dev/pandas/pull/62021
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62021
|
https://github.com/pandas-dev/pandas/pull/62021
|
jbrockmendel
| 1
|
Significantly trim the diff for PR(s) implementing always-distinguish behavior.
|
[
"Testing",
"Missing-data"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jbrockmendel "
] |
3,284,873,938
| 62,020
|
BUG: Fix is_signed_integer_dtype to handle abstract floating types (GH 62018)
|
open
| 2025-08-01T19:05:51
| 2025-08-22T15:57:45
| null |
https://github.com/pandas-dev/pandas/pull/62020
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62020
|
https://github.com/pandas-dev/pandas/pull/62020
|
Aniketsy
| 3
|
(GH 62018)
This PR fixes a bug in `is_signed_integer_dtype` where abstract NumPy floating types would raise a TypeError.
Now, the function returns False for these types, as expected.
Please let me know if my approach or fix needs any improvements. I'm open to feedback and happy to make changes based on suggestions.
Thank you!
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Please wait for discussion on the issue as to whether this is worth doing.",
"Hi @jbrockmendel, just following up, if there are any updates on #62018? Happy to adjust the PR based on the discussion.",
"Per my comment on the issue, i dont think anything should be done here. can wait to see if other maintainers have other opinions."
] |
3,284,662,373
| 62,019
|
REF: make copy keyword in recode_for_categories keyword only
|
closed
| 2025-08-01T17:36:37
| 2025-08-04T22:07:49
| 2025-08-04T21:19:49
|
https://github.com/pandas-dev/pandas/pull/62019
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62019
|
https://github.com/pandas-dev/pandas/pull/62019
|
mroeschke
| 1
|
Follows up to https://github.com/pandas-dev/pandas/pull/62000
`recode_for_categories` had a default `copy=True` that copied the passed codes even when the codes didn't need re-coding. This PR makes this argument keyword-only so the caller must state explicitly whether to copy, avoiding unnecessary copying when blindly using `recode_for_categories`.
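A sketch of the resulting call style, assuming the internal signature described above (shown for illustration only):
```python
# hypothetical stand-in for the internal pandas helper
def recode_for_categories(codes, old_categories, new_categories, *, copy=True):
    """Recode `codes` from old_categories' positions to new_categories'."""
    ...

# positional use of copy now fails; callers must state the intent explicitly:
# recode_for_categories(codes, old_cats, new_cats, copy=False)
```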
|
[
"Categorical"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @mroeschke "
] |
3,283,710,176
| 62,018
|
BUG: pd.api.types.is_signed_integer_dtype(np.floating) throws TypeError
|
open
| 2025-08-01T12:11:10
| 2025-08-01T20:49:38
| null |
https://github.com/pandas-dev/pandas/issues/62018
| true
| null | null |
windiana42
| 1
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
print(pd.__version__)
print(np.__version__)
pd.api.types.is_signed_integer_dtype(np.floating)
```
### Issue Description
for
pandas version 2.3.1
and numpy version 2.3.2,
The code above throws exception:
Traceback (most recent call last):
File "scratch_40.py", line 6, in <module>
pd.api.types.is_signed_integer_dtype(np.floating)
File ".pixi/envs/default/lib/python3.12/site-packages/pandas/core/dtypes/common.py", line 744, in is_signed_integer_dtype
return _is_dtype_type(
^^^^^^^^^^^^^^^
File ".pixi/envs/default/lib/python3.12/site-packages/pandas/core/dtypes/common.py", line 1467, in _is_dtype_type
return condition(np.dtype(arr_or_dtype).type)
^^^^^^^^^^^^^^^^^^^^^^
TypeError: Converting `np.inexact` or `np.floating` to a dtype not allowed
### Expected Behavior
pd.api.types.is_signed_integer_dtype(np.floating) == False
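A minimal sketch of the kind of guard that would give this result, assuming the `_is_dtype_type` shape shown in the traceback (the wrapper name here is made up):
```python
import numpy as np

def _is_dtype_type_safe(arr_or_dtype, condition):
    try:
        dtype = np.dtype(arr_or_dtype)
    except TypeError:
        # abstract types such as np.floating or np.inexact cannot be
        # converted to a concrete dtype, so they match nothing
        return False
    return condition(dtype.type)

_is_dtype_type_safe(np.floating, lambda typ: issubclass(typ, np.signedinteger))  # False
```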
### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
|
[
"Bug",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"I don't think we should expect this to work. There will never be an object with one of these objects as its dtype."
] |
3,283,447,071
| 62,017
|
BUG: Fix assert_series_equal with check_category_order=False for categoricals with nulls
|
closed
| 2025-08-01T10:39:40
| 2025-08-07T12:20:08
| 2025-08-07T12:20:08
|
https://github.com/pandas-dev/pandas/pull/62017
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62017
|
https://github.com/pandas-dev/pandas/pull/62017
|
anishkarki
| 0
|
- [x] closes #62008 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,282,657,327
| 62,016
|
DOC: Add example for multi-column joins using `merge`
|
open
| 2025-08-01T05:55:22
| 2025-08-01T06:57:07
| null |
https://github.com/pandas-dev/pandas/pull/62016
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62016
|
https://github.com/pandas-dev/pandas/pull/62016
|
thwait
| 0
|
- [x] closes #57722
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
The goal of this change is to add an explicit example for how to perform a multi-column join. While the existing Comparison with SQL documentation does mention that `merge` supports multiple columns, it is easy to miss and likely requires more research from the user to implement.
The new examples are placed inside the `INNER JOIN` section to continue from the existing "`merge()` also offers..." example; they are intended to read as notes about functionality that also applies to subsequent `JOIN` types. A self-contained version is sketched below.
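For reference, a small self-contained version of the kind of multi-column join being documented (the column names here are made up):
```python
import pandas as pd

left = pd.DataFrame({"key1": [1, 2], "key2": ["a", "b"], "lval": [10, 20]})
right = pd.DataFrame({"key1": [1, 2], "key2": ["a", "c"], "rval": [100, 200]})

# SQL equivalent: ... INNER JOIN ... ON left.key1 = right.key1 AND left.key2 = right.key2
left.merge(right, on=["key1", "key2"], how="inner")
```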
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,281,851,417
| 62,015
|
DOC: Add SSLCertVerificationError warning message for documentation b…
|
closed
| 2025-07-31T22:04:18
| 2025-08-05T16:05:59
| 2025-08-05T16:05:54
|
https://github.com/pandas-dev/pandas/pull/62015
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62015
|
https://github.com/pandas-dev/pandas/pull/62015
|
jeffersbaxter
| 2
|
…uild fail
- [ ] closes #61975
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
`pre-commit run --files doc/source/development/contributing_documentation.rst` PASSED locally
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Here is a screenshot of the changed section in Contributing to the Documentation, if you chose to merge.\r\n\r\n<img width=\"825\" height=\"360\" alt=\"Screenshot 2025-08-02 at 10 34 19 PM\" src=\"https://github.com/user-attachments/assets/7cf8297b-3781-42f6-b705-bb63f8fc90bb\" />\r\n",
"Thanks @jeffersbaxter "
] |
3,281,338,614
| 62,014
|
Fix cbusday calendar Typecheck v2
|
open
| 2025-07-31T18:06:57
| 2025-07-31T18:07:42
| null |
https://github.com/pandas-dev/pandas/pull/62014
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62014
|
https://github.com/pandas-dev/pandas/pull/62014
|
prblydv
| 0
|
- [x] closes #60647
- [x] Tests added and passed
- [x] Code checks passed
- [ ] Type hints added (not applicable here)
- [x] Added an entry to `whatsnew/v2.2.0.rst` under "Bug fixes > Timeseries"
### Initial Bug
Passing an object like `NYSEExchangeCalendar` to `CustomBusinessDay(calendar=...)` failed or behaved unexpectedly and showed New Year's Day as a business day, because the code accepted non-`busdaycalendar` types without raising an error.
### Fix
As Richard Shadrach noted, it should raise a `TypeError`. This PR adds a type check in `_get_calendar()` that raises a `TypeError` if the `calendar` argument is not an instance of `np.busdaycalendar`.
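A standalone sketch of that check (the real change lives in `_get_calendar()` in `offsets.pyx`; this version is illustrative):
```python
import numpy as np

def _validate_calendar(calendar):
    if calendar is not None and not isinstance(calendar, np.busdaycalendar):
        raise TypeError(
            "calendar must be a numpy.busdaycalendar, "
            f"got {type(calendar).__name__}"
        )
```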
### Test
A new test was added in `test_custom_business_day.py` to confirm that a `TypeError` is raised when an invalid calendar object is passed.
I realize the previous version may have sounded AI-generated. This revised PR is fully authored by me; I'm happy to answer any questions or make changes as needed.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,281,125,909
| 62,013
|
BUG: Sql select from database type cast issue
|
open
| 2025-07-31T16:47:49
| 2025-07-31T16:47:49
| null |
https://github.com/pandas-dev/pandas/issues/62013
| true
| null | null |
Closius
| 0
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
from pathlib import Path
import tempfile
import shutil
import datetime
import pandas as pd
from sqlalchemy import (
create_engine, MetaData, Table, Column, Integer,
Float, DateTime, select, insert
)
class DB:
def __init__(self, db_file_path: str | Path):
self.engine = create_engine(f"sqlite:///{db_file_path}", echo=False)
self.metadata = MetaData()
self._table = Table(
"main_table", self.metadata,
Column('time', DateTime, unique=True),
Column('a', Float, nullable=False),
Column('b', Float, nullable=False),
Column('c', Integer, nullable=True),
Column('d', Float, nullable=True),
keep_existing=True
)
self.metadata.create_all(self.engine)
def populate(self):
data = [
{"time": datetime.datetime.fromisoformat('2025-07-26T04:11:00Z'), "a": 1.1, "b": 1.2, "c": None, "d": None},
{"time": datetime.datetime.fromisoformat('2025-07-26T05:22:00Z'), "a": 2.1, "b": 2.2, "c": None, "d": None},
{"time": datetime.datetime.fromisoformat('2025-07-26T06:33:00Z'), "a": 3.1, "b": 3.2, "c": None, "d": None},
{"time": datetime.datetime.fromisoformat('2025-07-26T07:44:00Z'), "a": 4.1, "b": 4.2, "c": None, "d": None},
{"time": datetime.datetime.fromisoformat('2025-07-26T08:55:00Z'), "a": 5.1, "b": 5.2, "c": None, "d": None},
]
with self.engine.connect() as conn:
conn.execute(insert(self._table), data)
conn.commit()
def read_records(self):
_select = select(self._table)
with self.engine.connect() as conn:
df = pd.read_sql(_select, conn)
return df
def __del__(self):
self.engine.dispose()
if __name__ == '__main__':
temp_folder = Path(tempfile.mkdtemp(prefix="pandas_bug_"))
db_file_path = temp_folder / "collected_data_binance.db"
print(db_file_path)
if db_file_path.exists():
db_file_path.unlink()
db = DB(db_file_path=db_file_path)
db.populate()
df = db.read_records()
print(df.info())
del(db)
shutil.rmtree(temp_folder)
```
### Issue Description
Db types:
<img width="1307" height="233" alt="Image" src="https://github.com/user-attachments/assets/c9d3a7d6-7209-4190-bcbe-6808f614d659" />
Db data:
<img width="481" height="262" alt="Image" src="https://github.com/user-attachments/assets/bc54eb08-f435-424a-b5a2-bd3f05d48855" />
After reading data using `pd.read_sql(_select, conn)` we see:
<img width="797" height="390" alt="Image" src="https://github.com/user-attachments/assets/b523051d-069d-4e53-9d36-169eb4d10162" />
### Expected Behavior
Columns containing only NULL should inherit their dtype from the database column type instead of falling back to "object".
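Until then, a possible workaround sketch is to coerce the all-NULL columns after reading; the nullable dtypes below are chosen to match the schema (`Integer` -> `Int64`, `Float` -> `Float64`):
```python
# hypothetical post-processing of the frame returned by read_records()
df = df.astype({"c": "Int64", "d": "Float64"})
```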
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 2cc37625532045f4ac55b27176454bbbc9baf213
python : 3.13.3
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.22631
machine : AMD64
processor : Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.3.0
numpy : 1.26.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.5.1
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.41
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,280,429,368
| 62,012
|
BUG: Raise TypeError for invalid calendar types in CustomBusinessDay (#60647)
|
closed
| 2025-07-31T13:11:29
| 2025-07-31T15:54:55
| 2025-07-31T15:54:54
|
https://github.com/pandas-dev/pandas/pull/62012
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62012
|
https://github.com/pandas-dev/pandas/pull/62012
|
prblydv
| 1
|
- Closes #60647
### Bug Description
Previously, if an invalid `calendar` argument was passed to `CustomBusinessDay` (e.g., a `pandas_market_calendars` object), it was silently ignored. This resulted in potentially incorrect behavior without warning, which could lead to confusion and incorrect results.
### What This Fix Does
- Adds a strict type check in `offsets.pyx` to ensure the `calendar` parameter is either a `numpy.busdaycalendar` or `AbstractHolidayCalendar`.
- If the type is invalid, a `TypeError` is raised with a clear error message.
- This aligns with expected behavior and helps prevent incorrect usage.
### Tests Added
- ✅ New unit test `test_invalid_calendar_raises_typeerror` added to `test_custom_business_day.py` to assert that an invalid calendar raises a `TypeError`.
- ✅ Existing test `test_calendar` was updated to construct a valid `np.busdaycalendar` from `USFederalHolidayCalendar` dates.
- ✅ All 8 tests in this module now pass successfully.
### Why This Matters
Silently ignoring invalid input is dangerous and can introduce subtle bugs. This fix ensures strict input validation and protects downstream consumers from incorrect assumptions.
### Checklist
- [x] Bug fix added and tested
- [x] New test added for reproducibility
- [x] All existing + new tests pass locally via `pytest`
- [x] Clear commit message: `"BUG: Raise TypeError for invalid calendar types in CustomBusinessDay (#60647)"`
- [x] pandas test structure followed
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"We don't accept AI generated pull requests so closing"
] |
3,280,258,859
| 62,011
|
BUG: Fix assert_series_equal for categoricals with nulls and check_category_order=False (#62008)
|
closed
| 2025-07-31T12:22:18
| 2025-08-13T23:22:09
| 2025-08-13T23:22:09
|
https://github.com/pandas-dev/pandas/pull/62011
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62011
|
https://github.com/pandas-dev/pandas/pull/62011
|
Aniketsy
| 2
|
### Description
This PR fixes an issue where `pd.testing.assert_series_equal` fails when comparing Series with categorical values containing NaNs when using `check_category_order=False`.
### Problem
When using `left.categories.take(left.codes)` for comparing category values, null codes (-1) were not handled correctly, causing incorrect comparisons.
Please let me know if my approach or fix needs any improvements. I'm open to feedback and happy to make changes based on suggestions.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hi @mroeschke\r\nThis PR fixes an issue where pd.testing.assert_series_equal fails when comparing Series with categorical values containing NaNs when using check_category_order=False.\r\n\r\nI'd really appreciate it if you could take a look and provide feedback .\r\nPlease let me know if anything needs to be improved or clarified.\r\n\r\nThanks!",
"Hi @jorisvandenbossche,\r\nCould you please review this PR and let me know if any changes are needed.\r\nAlso,I want to ask if the is issue already assigned to someone should I continue working on it or leave it to the current assignee?\r\nThanks!"
] |
3,279,734,756
| 62,010
|
DOC: Series and DataFrame.reindex accepts Timedelta as tolerance, which is not documented
|
closed
| 2025-07-31T09:23:06
| 2025-08-05T07:47:38
| 2025-08-05T07:47:38
|
https://github.com/pandas-dev/pandas/issues/62010
| true
| null | null |
cmp0xff
| 2
|
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reindex.html
- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html
### Documentation problem
The following code snippet works:
```py
import pandas as pd
sr = pd.Series([1, 2], pd.to_datetime(["2023-01-01", "2023-01-02"]))
sr.reindex(index=pd.to_datetime(["2023-01-02", "2023-01-03"]), method="ffill", tolerance=pd.Timedelta("1D"))
df = sr.to_frame()
df.reindex(index=pd.to_datetime(["2023-01-02", "2023-01-03"]), method="ffill", tolerance=pd.Timedelta("1D"))
```
but the documentation for `tolerance` does not mention that a `Timedelta` is accepted.
### Suggested fix for documentation
Append `Timedelta` in the documentation for `tolerance`.
|
[
"Docs",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"In the docs I see that the value for tolerance can be a scalar so Timedelta falls into that category (see the definition of Scalar in pandas/_typing.py).\nIf anything you could correct the docstring as it says that in the case of a list it needs to match the dtype of the index which is misguiding since the subtraction of two Timestamps is not a Timestamp but a timedelta.\n\nLet me know if I missed something!",
"Thank you @loicdiridollou , I believe it is an issue for the stubs, for which I created pandas-dev/pandas-stubs#1307."
] |
3,279,409,636
| 62,009
|
BUG FIX: pandas.arrays.IntervalArray.overlaps() incorrectly documents that it accepts IntervalArray.
|
open
| 2025-07-31T07:24:16
| 2025-07-31T16:12:39
| null |
https://github.com/pandas-dev/pandas/pull/62009
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62009
|
https://github.com/pandas-dev/pandas/pull/62009
|
khemkaran10
| 0
|
`pandas.arrays.IntervalArray.overlaps` does not support _IntervalArray_ yet, so the parameter _other_ should be documented as _Interval_.
Reference: https://pandas.pydata.org/docs/reference/api/pandas.arrays.IntervalArray.overlaps.html
- [x] closes #62004
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,279,403,198
| 62,008
|
BUG: `assert_series_equal` broken with `check_category_order=False` for arrays with null values
|
open
| 2025-07-31T07:21:50
| 2025-08-01T07:33:31
| null |
https://github.com/pandas-dev/pandas/issues/62008
| true
| null | null |
fjetter
| 3
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
import numpy as np
values = ['B', np.nan, 'D']
categories_left = ['B', 'D']
# Can be any other ordering
categories_right = categories_left[::-1]
left = pd.Series(pd.Categorical(values, categories=categories_left))
right = pd.Series(pd.Categorical(values, categories=categories_right))
assert set(categories_left) == set(categories_right)
pd.testing.assert_series_equal(left, right, check_category_order=False)
```
### Issue Description
```python-traceback
AssertionError: Series category.values are different
Series category.values values are different (33.33333 %)
[left]: Index(['B', 'D', 'D'], dtype='str')
[right]: Index(['B', 'B', 'D'], dtype='str')
```
The issue is caused by this `take` call, https://github.com/pandas-dev/pandas/blob/d4ae6494f2c4489334be963e1bdc371af7379cd5/pandas/_testing/asserters.py#L498-L499, which does not account for null values (regardless of the kind of null).
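A small demonstration of the problem: code -1 marks a missing value, but a bare `take` wraps it around to the last category:
```python
import numpy as np
import pandas as pd

cats = pd.Index(["B", "D"])
codes = np.array([0, -1, 1])  # -1 encodes a missing value

cats.take(codes)                                      # Index(['B', 'D', 'D'])
cats.take(codes, allow_fill=True, fill_value=np.nan)  # Index(['B', nan, 'D'])
```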
### Expected Behavior
No `AssertionError`
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : d4ae6494f2c4489334be963e1bdc371af7379cd5
python : 3.12.11
python-bits : 64
OS : Darwin
OS-release : 24.5.0
Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:53:27 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6041
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 3.0.0.dev0+2278.gd4ae6494f2
numpy : 2.4.0.dev0+git20250730.d621a31
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : 8.2.3
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
bottleneck : None
fastparquet : None
fsspec : 2025.5.1
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
psycopg2 : None
pymysql : None
pyarrow : 22.0.0.dev19
pyiceberg : None
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pytz : 2025.2
pyxlsb : None
s3fs : None
scipy : 1.16.0
sqlalchemy : 2.0.41
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Categorical"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Confirmed on main. PRs are welcome!\n\nThanks for raising this!",
"The issue here is that for null values, left.codes / right.codes will give -1. if we call take() on this, it will set the np.nan with last value in the array (if allow_fill = False). I think the fix could be to pass allow_fill as True , and fill_value as np.nan.\n```python\nassert_index_equal(\n left.categories.take(left.codes, allow_fill=True, fill_value=np.nan),\n right.categories.take(right.codes, allow_fill=True, fill_value=np.nan),\n obj=f\"{obj}.values\",\n exact=exact,\n)\n```",
"Take"
] |
3,278,821,384
| 62,007
|
DOC: Standardize noncompliant docstrings in pandas/io/html.py (flake8-docstrings) #61944
|
open
| 2025-07-31T01:00:46
| 2025-08-08T01:03:26
| null |
https://github.com/pandas-dev/pandas/pull/62007
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62007
|
https://github.com/pandas-dev/pandas/pull/62007
|
gumus-g
| 0
|
…-docstrings) #61944
This PR addresses docstring violations identified in [#61944](https://github.com/pandas-dev/pandas/issues/61944).
### Changes made:
- `_build_xpath_expr`: fixed D205 and D400
- `_build_doc`: fixed D205, D400, and D401
- `_equals_tag` and `_handle_hidden_tables`: fixed D400
- Verified compliance using `pydocstyle` and `flake8-docstrings`
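For context, a sketch of what fixing these codes looks like (the function name is taken from the list above; the docstring text is illustrative):
```python
def _build_xpath_expr(attrs):
    """Build an xpath expression that filters elements by attribute.

    D205 requires a blank line between the summary and this description,
    D400 requires the summary to end in a period, and D401 requires the
    summary to use the imperative mood ("Build", not "Builds").
    """
```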
### Notes:
- Scope limited to validated violations from the issue
- `_remove_whitespace` was not flagged in my environment (may be config-dependent)
- Let me know if you'd like additional functions standardized for consistency
- [x] closes #61944
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs",
"IO HTML"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,278,728,725
| 62,006
|
BUG: Implement elementwise IntervalArray.overlaps (#62004)
|
closed
| 2025-07-30T23:46:44
| 2025-07-31T11:16:57
| 2025-07-31T11:16:57
|
https://github.com/pandas-dev/pandas/pull/62006
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62006
|
https://github.com/pandas-dev/pandas/pull/62006
|
Aniketsy
| 0
|
This PR fixes #62004: IntervalArray.overlaps now supports IntervalArray and IntervalIndex inputs.
Please let me know if there are any improvements I can make to my approach or fix. I'm happy to incorporate any feedback or suggestions you may have.
Thank you!
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,278,524,833
| 62,005
|
DOC: documenting pandas.MultIndex.argsort
|
open
| 2025-07-30T21:48:05
| 2025-07-31T03:51:40
| null |
https://github.com/pandas-dev/pandas/pull/62005
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62005
|
https://github.com/pandas-dev/pandas/pull/62005
|
loicdiridollou
| 0
|
- [x] closes #61998
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Docs",
"MultiIndex",
"Sorting"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,278,492,369
| 62,004
|
BUG: `IntervalArray.overlaps()` documents that it accepts another `IntervalArray`, but it is not implemented
|
open
| 2025-07-30T21:28:09
| 2025-08-01T14:30:45
| null |
https://github.com/pandas-dev/pandas/issues/62004
| true
| null | null |
Dr-Irv
| 6
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
data = [(0, 1), (1, 3), (2, 4)]
intervals = pd.arrays.IntervalArray.from_tuples(data)
intervals.overlaps(intervals)
```
### Issue Description
When running the above, pandas reports:
```text
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Condadirs\envs\pandasstubs\lib\site-packages\pandas\core\arrays\interval.py", line 1406, in overlaps
raise NotImplementedError
NotImplementedError
```
### Expected Behavior
Either we don't document this functionality, or we implement it (ideally the latter!!)
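A minimal sketch of the elementwise behavior requested here, built on the public `Interval.overlaps` (missing intervals are not handled; this is not pandas' internal implementation):
```python
import numpy as np
import pandas as pd

def pairwise_overlaps(a, b):
    if len(a) != len(b):
        raise ValueError("IntervalArrays must have the same length")
    return np.array([x.overlaps(y) for x, y in zip(a, b)], dtype=bool)

data = [(0, 1), (1, 3), (2, 4)]
intervals = pd.arrays.IntervalArray.from_tuples(data)
pairwise_overlaps(intervals, intervals)  # array([ True,  True,  True])
```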
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.10.14
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.26100
machine : AMD64
processor : Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : English_United States.1252
pandas : 2.3.1
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 24.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : 1.1
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 6.0.0
matplotlib : 3.10.3
numba : None
numexpr : 2.11.0
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 21.0.0
pyreadstat : 1.3.0
pytest : 8.4.1
python-calamine : None
pyxlsb : 1.0.10
s3fs : None
scipy : 1.15.3
sqlalchemy : 2.0.41
tables : 3.10.1
tabulate : 0.9.0
xarray : 2025.6.1
xlrd : 2.0.2
xlsxwriter : 3.2.5
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Docs",
"Interval"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Some discussion here:\n\nhttps://github.com/pandas-dev/pandas/pull/22939#discussion_r227746448\n\nMostly pointing at #18975. With this, I would recommend making this on fixing the docstring for now and we can discuss implementing in the future if desired. \n\nFor this incorrect docstring, the error was introduced in #26316.",
"I should say that I was trying to use this functionality in an application, so it would be good if it worked!",
"Makes sense, no opposition here. A cursory read of the linked issues indicated there was quite some debate on how that should behave. But that was 7 years ago, I think a fresh proposal for the API could be less contentious now.",
"I think the docs somewhat suggested the right API at https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.arrays.IntervalArray.overlaps.html#pandas.arrays.IntervalArray.overlaps which says \"Check elementwise if an Interval overlaps the values in the IntervalArray.\" \n\nAlthough that description is ambiguous.\n\nSo I'd vote for doing elementwise overlaps - that's what I needed. If someone has 2 arrays and wants to compare all the intervals, you do a `cross` join and then call overlaps.\n",
"@Dr-Irv this is what we are expecting right?\n\n```python\na = IntervalArray.from_tuples([(1, 2), (3, 4), (4, 5)])\nb = IntervalArray.from_tuples([(4, 5), (1, 2)])\n\na.overlaps(b)\n\narray([\n [False, False, True], \n [True, False, False]\n]) \n```",
"> [@Dr-Irv](https://github.com/Dr-Irv) this is what we are expecting right?\n> \n> a = IntervalArray.from_tuples([(1, 2), (3, 4), (4, 5)])\n> b = IntervalArray.from_tuples([(4, 5), (1, 2)])\n> \n> a.overlaps(b)\n> \n> array([\n> [False, False, True], \n> [True, False, False]\n> ])\n\nNo. If the arrays are of different length, I would expect an exception to be raised.\n\nI just want it to be pairwise.\n\nIf you want to do something like the above, then the following is what I would propose for that use case\n\n```python\ncross = pd.merge(pd.Series(pd.arrays.IntervalArray.from_tuples([(1, 2), (3, 4), (4, 5)]),name=\"a\"), \n pd.Series(pd.arrays.IntervalArray.from_tuples([(4, 5), (1, 2)]), name=\"b\"), how=\"cross\")\ncross.assign(result=IntervalArray(cross[\"a\"]).overlaps(IntervalArray(cross[\"b\"]).set_index([\"a\", \"b\"]).unstack(sort=False).T.values\n```\nI think the above would give the same result, although it is a bit awkward.\n\nFrom my understanding the debate in #18975 was whether the operation should be a cross operation or an element-by-element one. One option to avoid that is to have an argument to `overlaps()` that indicates whether it should be by element or crosswise.\n"
] |
3,278,195,740
| 62,003
|
Fix for issue 62001; ENH: Context-aware error messages for optional dependencies
|
closed
| 2025-07-30T19:18:29
| 2025-07-31T00:36:14
| 2025-07-30T20:49:11
|
https://github.com/pandas-dev/pandas/pull/62003
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62003
|
https://github.com/pandas-dev/pandas/pull/62003
|
wilocu
| 2
|
#62001
### Summary
This PR enhances `import_optional_dependency()` to provide context-aware error messages that suggest relevant alternatives when dependencies are missing, addressing issue #62001.
Before: `Missing optional dependency 'openpyxl'. Use pip or conda to install openpyxl.`
After: `Missing optional dependency 'openpyxl'. For Excel file operations, try installing xlsxwriter, calamine, xlrd, pyxlsb, or odfpy. Use pip or conda to install openpyxl.`
### Implementation Details
- Core Enhancement: Added `operation_context` parameter to `import_optional_dependency()` with 13 predefined contexts (excel, plotting, html, xml, sql, performance, compression, cloud, formats, computation, timezone, testing, development)
- Smart Alternative Filtering: Excludes the failed dependency from suggestions to avoid confusion
- Backward Compatibility: All existing calls work unchanged; the new parameter is optional
- Strategic Implementation: Updated high-impact locations where users commonly encounter missing dependencies:
  - Excel operations (5 readers: openpyxl, xlrd, calamine, pyxlsb, odf)
  - Plotting operations (matplotlib)
  - HTML parsing operations (html5lib)
### Files Modified
1. pandas/compat/_optional.py: Core enhancement with context mapping and message building
2. pandas/tests/test_optional_dependency.py: Updated test patterns and added comprehensive context tests
3. pandas/io/excel/_*.py: Added context to 5 Excel readers
4. pandas/plotting/_core.py: Added plotting context
5. pandas/io/html.py: Added HTML parsing context
6. doc/source/whatsnew/v3.0.0.rst: Added whatsnew entry
### Testing
- All existing tests pass with updated patterns
- New tests verify context functionality works correctly
- Manual verification confirms all files compile successfully
- Backward compatibility maintained for existing calls
- [x] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Please make sure to double-check; I am still new to contributing, so let me know if there are any mistakes.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Also I suspect this PR was AI generated so closing. We discourage heavily AI generated pull requests",
"It partially was, I appreciate the review, it makes sense, thank you"
] |
3,277,510,423
| 62,002
|
DOC: Simplify footer text in pandas documentation
|
closed
| 2025-07-30T15:17:42
| 2025-07-30T15:50:08
| 2025-07-30T15:50:08
|
https://github.com/pandas-dev/pandas/pull/62002
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62002
|
https://github.com/pandas-dev/pandas/pull/62002
|
revanthpuvanes
| 1
|
This PR simplifies the documentation footer template for clarity.
(Note: This is unrelated to issue #60647, which is about CustomBusinessDay.)
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This is fine as is, closing"
] |
3,277,191,856
| 62,001
|
ENH: error messages for missing optional dependencies should point out the options
|
open
| 2025-07-30T13:59:14
| 2025-08-14T13:40:29
| null |
https://github.com/pandas-dev/pandas/issues/62001
| true
| null | null |
joooeey
| 8
|
### Feature Type
- [ ] Adding new functionality to pandas
- [x] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
When using a functionality that requires a ~~performance dependency~~ [optional dependency](https://pandas.pydata.org/docs/getting_started/install.html#optional-dependencies) that is not installed, the error message points out a specific library instead of the multiple options that the user has.
See the following report from an [earlier, very similar, already closed issue](https://github.com/pandas-dev/pandas/issues/58246):
> Hi, I have just started learning about pandas with "Getting Started tutorials". On the "How do I read and write tabular data?" tutorial when I ran the command got an unexpected error and no suggestion was provided there to solve this issue. The command and error are as follows:
>
> Command: `titanic.to_excel('titanic.xlsx', sheet_name='Passengers', index=False)`
>
> Error: `ModuleNotFoundError: No module named 'openpyxl'`
>
> I solved the issue by installing `openpyxl` using pip with `pip install openpyxl`.
>
> [...]
### Feature Description
The error message should be changed to something along the lines of:
<s>
Missing optional dependency. To use this functionality, you need to install
xlrd, xlsxwriter, openpyxxl, pyxlsb or python-calamine.
</s>
```
Missing optional dependency. To use this functionality, you need to install xlsxwriter or openpyxl.
```
Similar error messages should be emitted when trying to use any of the other ~~performance~~ optional dependencies (plots, computation, HTML, XML, SQL, etc.)
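A rough sketch of how such a message could be assembled; the context table and helper below are assumptions, not existing pandas API:
```python
# hypothetical mapping from functionality to installable alternatives
_CONTEXT_ALTERNATIVES = {
    "excel-write": ["xlsxwriter", "openpyxl"],
    "plotting": ["matplotlib"],
}

def missing_dependency_message(context):
    options = " or ".join(_CONTEXT_ALTERNATIVES[context])
    return (
        "Missing optional dependency. To use this functionality, "
        f"you need to install {options}."
    )

print(missing_dependency_message("excel-write"))
```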
### Alternative Solutions
If you are good at searching the web or know Pandas well, you can figure out that you have multiple options, otherwise you just install the module mentioned in the current error message.
### Additional Context
_No response_
|
[
"Enhancement",
"Error Reporting",
"Closing Candidate"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"take",
"> When using a functionality that requires a [performance dependency](https://pandas.pydata.org/docs/getting_started/install.html#performance-dependencies-recommended) that is not installed, the error message points out a specific library instead of the multiple options that the user has.\n\nCan you give a reproducer here? Is it really the case that installing any one of them will resolve the issue?",
"> Can you give a reproducer here? Is it really the case that installing any one of them will resolve the issue?\n\nPer the docs linked already twice in this thread, `to_excel` should only work with openpyxl and xlswriter. I did `mamba install xlswriter`.\n\nI'm not sure if it's best to link to the documentation or list the options in each case. If we list the options, it would be good to pull the options dynamically from the surrounding code. I'll edit my question and scratch the too-long list of options.",
"> Per the docs linked already twice in this thread, `to_excel` should only work with openpyxl and xlswriter. I did `mamba install xlswriter`.\n\nI see no links to documentation on Excel in this thread, nor is this statement correct: `odf` can also be used as a writer. In any case, I do not see what this has to do with my request for a reproducer of the performance dependency error.",
"> I see no links to documentation on Excel in this thread, nor is this statement correct: `odf` can also be used as a writer. In any case, I do not see what this has to do with my request for a reproducer of the performance dependency error.\n\n[This is the section about excel](https://pandas.pydata.org/docs/getting_started/install.html#excel-files) in the Pandas performance dependency docs.\n\nWhat do you mean with \"odf\"? There is no Python library with that name that writes tables.\n\nThe reproducer of the error has been in the issue all along. Here's a more focused reproducer:\n```\n$ mamba create -n test pandas\n$ mamba activate test\n$ python\n>>> import pandas as pd\n>>> pd.DataFrame([[1, 2, 3]]).to_excel(\"test.xlsx\")\nTraceback (most recent call last):\n File \"<python-input-1>\", line 1, in <module>\n pd.DataFrame([[1, 2, 3]]).to_excel(\"test.xlsx\")\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^\n File \"/home/lukas/miniforge3/envs/test/lib/python3.13/site-packages/pandas/util/_decorators.py\", line 333, in wrapper\n return func(*args, **kwargs)\n File \"/home/lukas/miniforge3/envs/test/lib/python3.13/site-packages/pandas/core/generic.py\", line 2436, in to_excel\n formatter.write(\n ~~~~~~~~~~~~~~~^\n excel_writer,\n ^^^^^^^^^^^^^\n ...<6 lines>...\n engine_kwargs=engine_kwargs,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n )\n ^\n File \"/home/lukas/miniforge3/envs/test/lib/python3.13/site-packages/pandas/io/formats/excel.py\", line 943, in write\n writer = ExcelWriter(\n writer,\n ...<2 lines>...\n engine_kwargs=engine_kwargs,\n )\n File \"/home/lukas/miniforge3/envs/test/lib/python3.13/site-packages/pandas/io/excel/_openpyxl.py\", line 57, in __init__\n from openpyxl.workbook import Workbook\nModuleNotFoundError: No module named 'openpyxl'\n>>> quit()\n$ mamba install xlsxwriter\n$ python\n>>> import pandas as pd\n>>> pd.DataFrame([[1, 2, 3]]).to_excel(\"test.xlsx\")\n>>>\n```\n\nIt succeeds after install xlsxwriter.\n\n",
"Ahhh, I think I finally understand the confusion! The docs you have been linking to are `Optional dependencies`. In that, there are two subsections (among others): `Performance dependencies` and `Excel files`. These are separate subsections. I believe you mean this issue to be about the Excel file dependencies (e.g. openpyxl, xlsxwriter) and not performance dependencies (e.g. numba, bottleneck).\n\n> What do you mean with \"odf\"? There is no Python library with that name that writes tables.\n\n```python\ndf = pd.DataFrame({\"a\": [1, 1, 2], \"b\": [3, 4, 5]})\ndf.to_excel(\"test.ods\", engine=\"odf\")\n```\n\nI am positive for changing the error message here to be more general as long as (a) it is accurate for the file type specified by the user and engine if provided and (b) does not involve introducing more metadata pandas must maintain (e.g. a list of engines for each file type, above and beyond what we already have). If this is not possible, I think the error message is okay as-is.\n\nLikewise, this applies to other types of optional dependencies as well.\n",
"> Ahhh, I think I finally understand the confusion! The docs you have been linking to are `Optional dependencies`. In that, there are two subsections (among others): `Performance dependencies` and `Excel files`. These are separate subsections. I believe you mean this issue to be about the Excel file dependencies (e.g. openpyxl, xlsxwriter) and not performance dependencies (e.g. numba, bottleneck).\n\nMy issue was meant to be about all `Optional dependencies`. However, I only have a reproducer for Excel. I assume this applies to the others as well but haven't checked. I'll edit the OP accordingly.\n\n",
"> I assume this applies to the others as well but haven't checked.\n\nPerhaps in some, but certainly not all. E.g. bottleneck and numexpr are not used for the same things. Also even in the Excel case, if a user does `df.to_excel(..., enging=\"openpyxl\")`, I do not think we should suggest installing xlsxwriter.\n\nI'd be okay with looking to improve the error message here, but only if it doesn't add significant complexities to the code. Otherwise, I think this is okay as-is."
] |
3,276,652,040
| 62,000
|
BUG: Avoid copying categorical codes if `copy=False`
|
closed
| 2025-07-30T11:24:54
| 2025-08-04T07:05:42
| 2025-08-01T16:58:40
|
https://github.com/pandas-dev/pandas/pull/62000
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/62000
|
https://github.com/pandas-dev/pandas/pull/62000
|
fjetter
| 1
|
Categorical codes are always copied by `recode_for_categories` regardless of the copy argument. This fixes it by passing the copy argument down to `recode_for_categories`
- ~[ ] closes #xxxx (Replace xxxx with the GitHub issue number)~
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Categorical"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @fjetter "
] |
3,275,421,046
| 61,999
|
Maddie doc simplify footer theme
|
closed
| 2025-07-30T01:42:49
| 2025-07-30T01:44:16
| 2025-07-30T01:43:15
|
https://github.com/pandas-dev/pandas/pull/61999
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61999
|
https://github.com/pandas-dev/pandas/pull/61999
|
maddiew95
| 0
|
Not using _template, all in conf.py
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,275,269,523
| 61,998
|
DOC: documenting pandas.MultIndex.argsort
|
open
| 2025-07-29T23:49:59
| 2025-07-30T02:42:36
| null |
https://github.com/pandas-dev/pandas/issues/61998
| true
| null | null |
loicdiridollou
| 3
|
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
No docs yet for `pd.MultiIndex.argsort`.
Only for `pd.Index`: https://pandas.pydata.org/docs/dev/reference/api/pandas.Index.argsort.html
### Documentation problem
Currently `pd.MultiIndex.argsort` is not documented, only `pd.Index` is.
The function signature is also different: the MultiIndex version has a `na_position` argument between `*args` and `**kwargs`.
Is this something we intend to document in the future, or is it something that is not recommended for use by users?
Thanks!
### Suggested fix for documentation
Adding docs seems the best option forward.
|
[
"Docs",
"MultiIndex",
"Sorting"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the report. Agreed we should include this in the API docs. A PR to add this would be welcome.",
"Thanks for the feedback, I will address that.",
"take"
] |
3,275,089,979
| 61,997
|
DOC: add button to edit on GitHub
|
closed
| 2025-07-29T21:57:48
| 2025-07-29T23:59:41
| 2025-07-29T23:59:40
|
https://github.com/pandas-dev/pandas/pull/61997
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61997
|
https://github.com/pandas-dev/pandas/pull/61997
|
DoNguyenHung
| 3
|
- [x] closes #39859
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Changes made:
- Added an extension that adds a sidebar with extra "Show on GitHub" and "Edit on GitHub" links. Found [here](https://mg.pov.lt/blog/sphinx-edit-on-github.html).
- Modified conf.py to make sure the extension is added and links direct to editable pages.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Hi [mroeschke](https://github.com/mroeschke), would you mind taking a look at this? My fix should have all the links direct-able and I added an extension so it resembles the issue description #39859. This is my first issue so please let me know if there's any convention I'm missing. Also, I said in the issue that I was working on this, but I had a ton of issues building pandas for the first time, so that's why my update is delayed. Thanks in advance!",
"pre-commit.ci autofix",
"Thanks for the PR, but someone is already working on this in https://github.com/pandas-dev/pandas/pull/61956 so closing to let them have a change to finish. But happy to have contributions labeled `good first issue` that doesn't have a linked PR open"
] |
3,274,527,166
| 61,996
|
TST: Raise on `pytest.PytestWarning`
|
closed
| 2025-07-29T18:08:43
| 2025-07-30T16:08:47
| 2025-07-30T16:08:44
|
https://github.com/pandas-dev/pandas/pull/61996
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61996
|
https://github.com/pandas-dev/pandas/pull/61996
|
mroeschke
| 0
|
Just to make the pytest warning summary a bit shorter
|
[
"Testing"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,274,486,584
| 61,995
|
BUG/DEPR: logical operation with bool and string
|
closed
| 2025-07-29T17:55:19
| 2025-08-15T07:12:34
| 2025-07-29T20:52:04
|
https://github.com/pandas-dev/pandas/pull/61995
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61995
|
https://github.com/pandas-dev/pandas/pull/61995
|
jbrockmendel
| 3
|
- [x] closes #60234 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Numeric Operations",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @jbrockmendel ",
"Owee, I'm MrMeeseeks, Look at me.\n\nThere seem to be a conflict, please backport manually. Here are approximate instructions:\n\n1. Checkout backport branch and update it.\n\n```\ngit checkout 2.3.x\ngit pull\n```\n\n2. Cherry pick the first parent branch of the this PR on top of the older branch:\n```\ngit cherry-pick -x -m1 36b8f20e06d3a322890173e6f520ed108825ea02\n```\n\n3. You will likely have some merge/cherry-pick conflict here, fix them and commit:\n\n```\ngit commit -am 'Backport PR #61995: BUG/DEPR: logical operation with bool and string'\n```\n\n4. Push to a named branch:\n\n```\ngit push YOURFORK 2.3.x:auto-backport-of-pr-61995-on-2.3.x\n```\n\n5. Create a PR against branch 2.3.x, I would have named this PR:\n\n> \"Backport PR #61995 on branch 2.3.x (BUG/DEPR: logical operation with bool and string)\"\n\nAnd apply the correct labels and milestones.\n\nCongratulations — you did some good work! Hopefully your backport PR will be tested by the continuous integration and merged soon!\n\nRemember to remove the `Still Needs Manual Backport` label once the PR gets merged.\n\nIf these instructions are inaccurate, feel free to [suggest an improvement](https://github.com/MeeseeksBox/MeeseeksDev).\n ",
"Manual backport -> https://github.com/pandas-dev/pandas/pull/62114"
] |
3,274,048,510
| 61,994
|
PERF: `pandas.DataFrame.stack` with `future_stack=True`
|
closed
| 2025-07-29T15:20:05
| 2025-07-29T16:43:23
| 2025-07-29T16:43:23
|
https://github.com/pandas-dev/pandas/issues/61994
| true
| null | null |
thedimlebowski
| 1
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this issue exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this issue exists on the main branch of pandas.
### Reproducible Example
```
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(100, 100))
%timeit df.stack(future_stack=False)
%timeit df.stack(future_stack=True)
```
```
242 μs ± 40.4 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
25.6 ms ± 4.75 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.11.13
python-bits : 64
OS : Linux
OS-release : 4.18.0-553.36.1.el8_10.x86_64
Version : #1 SMP Wed Jan 22 03:07:54 EST 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.3.1
numpy : 2.2.6
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.1.1
Cython : None
sphinx : None
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.4
blosc : None
bottleneck : 1.5.0
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.7.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : 3.10.3
numba : 0.61.2
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : 2.9.10
pymysql : None
pyarrow : 20.0.0
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : None
s3fs : 2025.7.0
scipy : 1.14.1
sqlalchemy : 2.0.41
tables : None
tabulate : 0.9.0
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
### Prior Performance
_No response_
|
[
"Performance",
"Needs Triage"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks, the performance issue should be addressed once 3.0 comes out xref https://github.com/pandas-dev/pandas/pull/58817"
] |
3,273,860,886
| 61,993
|
BUG: Inconsistent `datetime` dtype based on how the dataframe gets initialized
|
closed
| 2025-07-29T14:24:21
| 2025-08-01T14:13:28
| 2025-07-31T15:04:51
|
https://github.com/pandas-dev/pandas/issues/61993
| true
| null | null |
cosmic-heart
| 5
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
(Pdb) pd.DataFrame({"0": [datetime.fromtimestamp(1568888888, tz=pytz.utc)]}).dtypes
0 datetime64[ns, UTC]
dtype: object
(Pdb) pd.DataFrame({"0": datetime.fromtimestamp(1568888888, tz=pytz.utc)}, index=[0]).dtypes
0 datetime64[us, UTC]
dtype: object
(Pdb)
```
### Issue Description
When creating a pandas DataFrame with a timezone-aware datetime object (e.g., datetime.datetime with tzinfo=pytz.UTC), the inferred datetime64 precision differs depending on whether the datetime is passed as a scalar or inside a list. This leads to inconsistent and potentially unexpected behavior.
### Expected Behavior
Both DataFrame initializations should infer the same datetime dtype (datetime64[ns, UTC]), ideally following Pandas’ default precision of nanoseconds.
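For code that must behave the same across versions, one option is to pin the resolution explicitly instead of relying on inference (sketch; `timezone.utc` stands in for `pytz.utc`):
```python
from datetime import datetime, timezone

import pandas as pd

ts = datetime.fromtimestamp(1568888888, tz=timezone.utc)
df = pd.DataFrame({"0": [ts]}).astype({"0": "datetime64[ns, UTC]"})
df.dtypes  # 0    datetime64[ns, UTC]
```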
### Installed Versions
<details>
>>> pd.show_versions()
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.13.5
python-bits : 64
OS : Linux
OS-release : 6.8.0-47-generic
Version : #47-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 22:03:50 UTC 2024
machine : aarch64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : C.UTF-8
pandas : 2.3.1
numpy : 2.3.2
pytz : 2025.2
dateutil : 2.9.0.post0
pip : None
Cython : None
sphinx : 8.2.3
IPython : 9.4.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : 2.9.10
pymysql : None
pyarrow : 21.0.0
pyreadstat : None
pytest : 8.4.1
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.16.0
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
>>>
</details>
|
[
"Bug",
"Datetime",
"Needs Info"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"These both correctly give microsecond dtype on main. Can you confirm",
"> These both correctly give microsecond dtype on main. Can you confirm\n\nYes, I’m getting microseconds for both initializations from the main branch. However, I noticed that the main branch has been tagged as 3.0.0-dev branch. Will this fix also be backported to any upcoming 2.x releases?\n\nAdditionally, shouldn’t the default datetime object be in nanoseconds, as it was in pandas 1.x?",
"> Will this fix also be backported to any upcoming 2.x releases?\n\nNo, this is not a \"fix\" but an API change in 3.0 to do resolution inference in the non-scalar case.\n\n> Additionally, shouldn’t the default datetime object be in nanoseconds, as it was in pandas 1.x?\n\nNo, we do resolution inference based on the input. In this case the input is a python datetime object which has microsecond resolution.",
"But in version 2.3.1, the same datetime object behaves differently depending on how it’s initialized, when created as an array, it retains nanosecond precision, whereas initializing it with index=[0] results in microsecond precision. Doesn’t that seem like a bug?",
"We definitely want it to behave the same, which is why we implemented resolution inference for sequences for 3.0. But backporting that is not viable, and everything is behaving as expected/documented in 2.3.1."
] |
3,272,856,813
| 61,992
|
DOC: Point out difference in usage of "str" dtype in constructor and astype member
|
closed
| 2025-07-29T09:24:33
| 2025-08-20T02:40:09
| 2025-08-20T02:40:09
|
https://github.com/pandas-dev/pandas/issues/61992
| true
| null | null |
cbourjau
| 3
|
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
This concerns the 3.0 migration guide: https://pandas.pydata.org/docs/user_guide/migration-3-strings.html
### Documentation problem
The string migration [guide](https://pandas.pydata.org/docs/user_guide/migration-3-strings.html#the-missing-value-sentinel-is-now-always-nan) suggests using `"str"` in place of `"object"` to write compatible code. The example only showcases this suggestion for the Series constructor, where it indeed works as intended (Pandas 2.3.0):
```python
>>> import numpy as np
>>> import pandas as pd
>>> pd.Series(["a", None, np.nan, pd.NA], dtype="str").array
<NumpyExtensionArray>
['a', None, nan, <NA>]
Length: 4, dtype: object
```
However, the semantics of using `"str"` are different if the series has already been initialized with an `"object"` dtype and the user calls `astype("str")` on it:
```python
>>> series = pd.Series(["a", None, np.nan, pd.NA])
>>> series.array
<NumpyExtensionArray>
['a', None, nan, <NA>]
Length: 4, dtype: object
>>> series.astype("str").array
<NumpyExtensionArray>
['a', 'None', 'nan', '<NA>']
Length: 4, dtype: object
```
Note that all values have been cast to strings. In fact, this behavior appears to be the behavior of passing the literal `str` as the data type that is mentioned later in the bug-fix [section](https://pandas.pydata.org/docs/user_guide/migration-3-strings.html#astype-str-preserving-missing-values).
### Suggested fix for documentation
I believe this subtle difference should be pointed out in the migration guide. Ideally, a suggestion should be made on how one may write 3.0-compatible code using `astype`. In my case, the current Pandas 2 code is casting a categorical column (with string categories) into an object column, but I'd like to write code such that this operation becomes a string column in Pandas 3.
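For reference, one way to write this today is to branch on the pandas version. A hedged sketch (whether this is the recommended pattern is exactly what the guide should clarify):
```python
import pandas as pd

ser = pd.Series(["a", "b", "a", None], dtype="category")

# object dtype on pandas 2.x, string dtype on pandas 3.x
PD3 = int(pd.__version__.split(".")[0]) >= 3
out = ser.astype("str" if PD3 else "object")
```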
|
[
"Docs",
"Strings"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the report. Agreed this difference should be highlighted. With `infer_string` being set to True, these now give `['a', nan, nan, nan]`. I'm thinking this should be added to the `astype(str)` section and not be called a bugfix.\n\ncc @jorisvandenbossche ",
"Good point @cbourjau! \n\n> In my case, the current Pandas 2 code is casting a categorical column (with string categories) into an object column, but I'd like to write code such that this operation becomes a string column in Pandas 3.\n\nSo you currently have something like:\n\n```\n>>> pd.__version__\n'2.3.1'\n>>> ser = pd.Series([\"a\", \"b\", \"a\", None], dtype=\"category\")\n>>> ser.astype(\"object\").values\narray(['a', 'b', 'a', nan], dtype=object)\n```\n\nand then the question is how to write that such that it stays object dtype in 2.3 and becomes string dtype in 3.0. \nAnd indeed doing `astype(\"str\")` does not work as desired because of that \"bug\" of also stringifying missing values:\n\n```\n>>> ser.astype(str).values\narray(['a', 'b', 'a', 'nan'], dtype=object)\n>>> ser.astype(\"str\").values\narray(['a', 'b', 'a', 'nan'], dtype=object)\n```\n\nSomehow I thought that this was only the case of `str` and not `\"str\"` ... (given that I wrote exactly that in the migration guide in the section about the astype bug: _\"when using astype(str) (using the built-in str, not astype(\"str\")!)\"_, so that section is clearly wrong)\n\nIn that case I don't think there is another alternative than some conditional behaviour depending on the version, like:\n\n```python\nser.astype(\"str\" if pd.__version__ > \"3\" else \"object\").values\n```",
"I opened a PR to rewrite the section about `astype(str)`: https://github.com/pandas-dev/pandas/pull/62147. Feedback very welcome!"
] |
3,272,756,271
| 61,991
|
BUG: Python Package fails to load for some users, but not others.
|
closed
| 2025-07-29T08:57:34
| 2025-07-30T09:44:21
| 2025-07-30T08:19:21
|
https://github.com/pandas-dev/pandas/issues/61991
| true
| null | null |
ialvata
| 3
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
# Code
import pandas as pd
df = pd.DataFrame({"Name":["Braund"]})
```
### Issue Description
# Venv
The venv is owned by root:root with 755 permissions.
Pandas version 2.3.1 (but also happens with 2.2.3)
# Command
/opt/.venv/bin/python /home/user.name/python_scripts/sketches.py
# Traceback Message
Traceback (most recent call last):
File "/home/user.name/python_scripts/sketches.py", line 7, in <module>
df = pandas.DataFrame(
AttributeError: module 'pandas' has no attribute 'DataFrame'
Note: In fact, regardless of the method used, it seems to always output the same error message. I have used <user.name> to work with other packages in the same environment without any problem. However, if I use root user, then all the scripts I've tried with pandas work as expected.
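For this class of failure, a quick diagnostic snippet can distinguish a healthy install from a broken (namespace-package) import; this is an illustrative check, not an official pandas tool:
```python
import pandas

# A broken or partially-readable install typically imports as a bare
# namespace package, which has no attributes beyond the dunder ones.
print(pandas.__path__)                             # where Python found "pandas"
print(getattr(pandas, "__version__", "missing"))   # "missing" indicates a failed load
print("DataFrame" in dir(pandas))                  # False indicates a failed load
```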
### Expected Behavior
No error message, and creation of a data frame.
### Installed Versions
-> Replace this line with the output of pd.show_versions()
Using root privileges,
INSTALLED VERSIONS
------------------
commit : c888af6d0bb674932007623c0867e1fbd4bdc2c6
python : 3.10.12
python-bits : 64
OS : Linux
OS-release : 5.15.0-142-generic
Version : #152-Ubuntu SMP Mon May 19 10:54:31 UTC 2025
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.3.1
numpy : 1.26.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 22.0.2
Cython : None
sphinx : 8.1.3
IPython : 8.35.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.13.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.3.2
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : 5.3.2
matplotlib : 3.10.1
numba : 0.61.2
numexpr : None
odfpy : None
openpyxl : 3.1.5
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.1
pyreadstat : None
pytest : 8.3.5
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.13.1
sqlalchemy : 2.0.40
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
|
[
"Bug",
"Needs Info"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"This is weird. The pandas import works but not pandas.DataFrame? What do you get from `dir(pandas)`?",
"Just noting that is very likely not a pandas issue. I would suggest.\n\n1. Recreate your virtual environment with the packages you need (sharing these steps would be helpful to help diagnose)\n2. Making sure you have no other files named `pandas.py` that your script is importing",
"With a regular user:\n\n## Print Output\n/opt/torch_venv/bin/python /home/ivo.tavares/python_scripts/experiments.py\n\ndir(pandas) = ['__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__']\n\npd.__path__ = _NamespacePath(['/opt/torch_venv/lib/python3.10/site-packages/pandas'])\n \n## Possible Cause\nI did an strace to the process for running the exact same script. The regular user gets permission denied when loading pandas. \nIt seems to be related to the environmental variable: \nLD_LIBRARY_PATH and the order of the paths therein...\n\n"
] |
3,272,100,794
| 61,990
|
BUG: Fix ExtensionArray binary op protocol
|
closed
| 2025-07-29T05:07:31
| 2025-08-14T22:39:07
| 2025-08-14T22:39:01
|
https://github.com/pandas-dev/pandas/pull/61990
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61990
|
https://github.com/pandas-dev/pandas/pull/61990
|
tisjayy
| 11
|
- [x] closes #61866
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- Updated pandas/core/arrays/boolean.py to return NotImplemented in binary operations where appropriate, following Python's operator protocol.
- Added and updated tests to ensure correct error handling and array interaction behavior.
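For context, a minimal sketch of the Python binary-operator protocol this change follows (illustrative class, not the pandas code): returning `NotImplemented` lets the interpreter try the reflected operation on the other operand instead of raising immediately.
```python
class MaskedBool:
    def __init__(self, value: bool) -> None:
        self.value = value

    def __and__(self, other):
        if not isinstance(other, (bool, MaskedBool)):
            # Defer to other.__rand__ rather than raising, per the
            # Python binary-operator protocol.
            return NotImplemented
        other_value = other.value if isinstance(other, MaskedBool) else other
        return MaskedBool(self.value and other_value)
```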
|
[
"Numeric Operations",
"ExtensionArray"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"pre-commit.ci autofix",
"\r\npre-commit.ci autofix\r\n\r\n",
"pre-commit.ci autofix\r\n\r\n",
"pre-commit.ci autofix\r\n\r\n",
"pre-commit.ci autofix\r\n\r\n",
"@jbrockmendel can you please review at your convenience? thanks!",
"I'm out of town for a few days, will look at this when i get back.",
"pre-commit.ci autofix\r\n\r\n",
"@jbrockmendel changes you requested have been made. please take a look",
"Hi @mroeschke , I think this pr is ready, would you mind reviewing it if you get a chance?",
"thanks @tisjayy "
] |
3,272,056,352
| 61,989
|
ENH: Add engine='polars' support in read_csv
|
closed
| 2025-07-29T04:49:16
| 2025-07-29T05:10:40
| 2025-07-29T05:10:40
|
https://github.com/pandas-dev/pandas/pull/61989
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61989
|
https://github.com/pandas-dev/pandas/pull/61989
|
abujabarmubarak
| 0
|
### 🚀 Enhancement: Add `engine='polars'` Support in `read_csv`
#### 🔧 Summary of Changes
This PR introduces support for using **[Polars](https://pola-rs.github.io/polars/py-polars/html/reference/api/pl.read_csv.html)** as a backend CSV parsing engine in `pandas.read_csv`, providing faster parsing capabilities for large files.
The following changes are included:
* ✅ **Added support for** `engine="polars"` in `pandas.read_csv`
* ✅ **Dynamically imported** Polars and handled `ImportError` gracefully
* ✅ **Filtered** `read_csv()` kwargs to only allow those compatible with Polars
* ✅ **Converted** `Path` input to string (Polars does not accept path-like objects in all versions)
* ✅ **Added test case** `test_read_csv_with_polars` under `tests/io/parser`
* ✅ **Updated version** to `2.3.3.dev0` in `__init__.py` and `pyproject.toml` (as part of the development build)
* ✅ **Resolved all `ruff` linter errors and pre-commit hook failures** (e.g., B904, E501, F841, SC1017)
* ✅ **Formatted shell scripts** using `dos2unix` to fix line-ending issues across:
* `ci/code_checks.sh`
* `ci/run_tests.sh`
* `scripts/cibw_before_build.sh`
* `scripts/download_wheels.sh`
* `scripts/upload_wheels.sh`
* `gitpod/workspace_config`
---
#### 📆 Usage Example
```python
import pandas as pd
df = pd.read_csv("sample.csv", engine="polars")
print(df)
```
##### ✅ Expected Output:
```
a b
0 1 2
1 3 4
```
---
#### 💡 Why This Matters
Polars is a high-performance DataFrame library designed for speed and multi-threaded performance. Adding it as a supported backend:
* Provides **significant performance boosts** for CSV reading
* Enhances **flexibility** for end-users to choose engines (like `c`, `python`, or `polars`)
* Keeps Pandas future-ready with **optional modular parsing backends**
---
#### ✅ Tests & Quality Checks
* 🔪 Unit test added: `test_read_csv_with_polars`
* ✅ Passed: All pytest tests
* ✅ Passed: All pre-commit hooks
* ✅ Passed: `ruff`, `shellcheck`, `cython-lint`, `codespell`, etc.
* ↺ Converted scripts to LF line endings using `dos2unix` for consistent CI/CD compatibility
---
#### 🧠 Notes
* `polars` is treated as an **optional dependency**
* If not installed, Pandas will raise a clear error:
*“Polars is not installed. Please install it with 'pip install polars'.”*
---
#### 🙌 Acknowledgements
Thanks to the maintainers for reviewing this contribution!
Looking forward to feedback or further improvements.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,271,847,472
| 61,988
|
[ENH] Add `polars` Engine Support to `pd.read_csv()`
|
closed
| 2025-07-29T03:14:05
| 2025-07-29T03:43:50
| 2025-07-29T03:43:50
|
https://github.com/pandas-dev/pandas/pull/61988
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61988
|
https://github.com/pandas-dev/pandas/pull/61988
|
abujabarmubarak
| 0
|
### 🚀 Pull Request: [ENH] Add `polars` Engine Support to `pd.read_csv()`
---
### ❓ Problem Statement
Pandas' `read_csv()` function supports multiple engines like "c", "python", and "pyarrow" for reading CSV files. However, there is **no built-in support for the high-performance [Polars](https://pola-rs.github.io/polars-book/) engine**, which is known for its speed and efficiency in parsing large datasets.
✅ **Community Request**: Feature proposed in [Issue #61813](https://github.com/pandas-dev/pandas/issues/61813)
---
### 🛠️ Solution & What’s Included
This PR implements optional support for `engine="polars"` in `pandas.read_csv()` by:
1. **Modifying `readers.py`**:
- Checks if engine is "polars".
- Dynamically imports Polars and uses `pl.read_csv(...).to_pandas()` to return a pandas DataFrame.
```python
if kwds.get("engine") == "polars":
    try:
        import polars as pl  # type: ignore[import-untyped]
    except ImportError as err:
        raise ImportError(
            "Polars is not installed. Please install it with 'pip install polars'."
        ) from err
    df = pl.read_csv(filepath_or_buffer, **kwds).to_pandas()
    return df
```
2. **Ensuring compatibility in engine validation**:
```python
if engine not in ("c", "python", "pyarrow", "polars"):
    raise ValueError(f"Unknown engine: {engine}")
```
3. **Version Updates**:
- Updated version to `2.3.3.dev0` in:
- `__init__.py`
- `pyproject.toml`
4. **Testing**:
- Added a dedicated test: `pandas/tests/io/parser/test_read_csv_polars.py`
---
### 💡 Example Usage
```python
import pandas as pd
df = pd.read_csv("sample.csv", engine="polars")
print(df)
```
**Input file: `sample.csv`**
```
a,b
1,2
3,4
```
---
### 🎯 Expected Output
```
a b
0 1 2
1 3 4
```
- The file is parsed using Polars under the hood and returned as a `pandas.DataFrame`.
- Performance benefits without changing the Pandas API.
- Optional: only active if `polars` is installed.
---
### 📂 Files Modified
- `pandas/io/parsers/readers.py` → Add polars engine logic
- `pandas/__init__.py` → Version bump to `2.3.3.dev0`
- `pyproject.toml` → Version update
- `pandas/tests/io/parser/test_read_csv_polars.py` → New test file added
---
### 🧪 Tests
**Test name**: `test_read_csv_with_polars`
```python
import pytest


def test_read_csv_with_polars(tmp_path):
    pytest.importorskip("polars")  # skip if Polars is not installed
    pd = pytest.importorskip("pandas")
    file = tmp_path / "sample.csv"
    file.write_text("a,b\n1,2\n3,4")
    df = pd.read_csv(file, engine="polars")
    assert df.equals(pd.DataFrame({"a": [1, 3], "b": [2, 4]}))
```
✅ Result: **Passed with warning** (unrelated deprecation from pyarrow)
---
### 🧷 Notes
- Falls back to error if Polars is not installed.
- This is a non-breaking enhancement and does not affect existing functionality.
- Future expansion possible to support write or more Polars features.
---
🔁 Feedback welcome!
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,271,674,434
| 61,987
|
Fix warning for extra fields in read_csv with on_bad_lines callable
|
open
| 2025-07-29T01:42:35
| 2025-07-29T04:03:20
| null |
https://github.com/pandas-dev/pandas/pull/61987
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61987
|
https://github.com/pandas-dev/pandas/pull/61987
|
tisjayy
| 6
|
- [ ] closes #61837 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
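For context, a minimal repro of the behavior this PR touches, assuming the usual callable contract (the callable receives the bad line as a list of strings and requires `engine="python"`):
```python
import io

import pandas as pd

data = "a,b\n1,2\n3,4,5\n"  # second data row has an extra field

df = pd.read_csv(
    io.StringIO(data),
    engine="python",                    # callable on_bad_lines needs this engine
    on_bad_lines=lambda bad: bad[:2],   # drop the extra trailing fields
)
print(df)
```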
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"\r\npre-commit.ci autofix\r\n\r\n",
"pre-commit.ci autofix\r\n\r\n",
"\r\npre-commit.ci autofix\r\n\r\n",
"pre-commit.ci autofix\r\n\r\n",
"\r\npre-commit.ci autofix\r\n\r\n",
"\r\npre-commit.ci autofix\r\n\r\n"
] |
3,271,600,791
| 61,986
|
DOC: Improve docstrings in utility functions in pandas/core/common.py (lines 176–210)
|
closed
| 2025-07-29T00:36:13
| 2025-07-29T01:33:58
| 2025-07-29T01:33:53
|
https://github.com/pandas-dev/pandas/issues/61986
| true
| null | null |
eduardocamacho10
| 1
|
### Pandas version checks
- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)
### Location of the documentation
pandas/core/common.py (lines 176–210)
### Documentation problem
Several internal utility functions have unclear or missing docstrings.
- `not_none` returns a generator, unlike the others nearby, which return booleans (not documented).
- Functions like `any_not_none`, `all_none`, and `any_none` lack parameter descriptions and return types.
- `any_not_none` duplicates the logic of `any_none` but does not explain the inversion.
### Suggested fix for documentation
Improve the docstrings for the following utility functions in `pandas/core/common.py` (a sketch follows this list):
- Add a return-type clarification to `not_none`, explaining that it returns a generator, unlike the others in this section.
- For `any_not_none`, `all_none`, and similar functions, add a full docstring structure with:
  - Parameters section
  - Returns section
- Optional: refactor the duplicated logic between `any_not_none` and `any_none`.
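For illustration, a sketch of what a fuller docstring could look like (hypothetical wording, not pandas' actual source):
```python
def not_none(*args):
    """
    Yield the arguments that are not None.

    Unlike the neighboring ``any_none``/``all_none`` helpers, which
    return booleans, this returns a generator.

    Parameters
    ----------
    *args
        Arbitrary positional arguments.

    Returns
    -------
    generator
        The arguments that are not None, in order.
    """
    return (arg for arg in args if arg is not None)
```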
|
[
"Docs"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the report. While I personally do not think the identified functions require improvement, I wouldn't be strongly opposed to filling out these docstings. However, for internal functions as short as these are, I do not believe we need to take up space on the issue tracker with this. Closing."
] |
3,271,508,973
| 61,985
|
API: offsets.Day is always calendar-day
|
closed
| 2025-07-28T23:15:46
| 2025-08-12T01:04:03
| 2025-08-11T17:29:46
|
https://github.com/pandas-dev/pandas/pull/61985
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61985
|
https://github.com/pandas-dev/pandas/pull/61985
|
jbrockmendel
| 7
|
- [x] closes #44823
- [x] closes #55502
- [x] closes #41943
- [x] closes #51716
- [x] closes #35388
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Alternative to #55502 discussed at last week's dev meeting. This allows `TimedeltaIndex.freq` to be a `Day` even though it is not a `Tick`.
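For context, a sketch of the distinction (behavior under the new calendar-day semantics; exact output depends on version and zone):
```python
import pandas as pd

# US/Eastern springs forward on 2025-03-09, so this calendar day is 23 hours.
ts = pd.Timestamp("2025-03-08 12:00", tz="US/Eastern")

print(ts + pd.offsets.Day())     # same wall-clock time on the next day
print(ts + pd.Timedelta("24h"))  # exactly 24 hours later (13:00 next day)
```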
|
[
"Frequency"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"`/home/runner/work/pandas/pandas/doc/source/whatsnew/v3.0.0.rst:345: WARNING: Bullet list ends without a blank line; unexpected unindent. [docutils]`\r\n\r\nis stumping me. Who is our go-to person for debugging these?\r\n",
"Still seeing \r\n\r\n```\r\n/home/runner/work/pandas/pandas/doc/source/whatsnew/v3.0.0.rst:345: WARNING: Bullet list ends without a blank line; unexpected unindent. [docutils]\r\n```\r\n\r\nTried building the docs locally and jinja2 is complaining `ValueError: PackageLoader could not find a 'io/formats/templates' directory in the 'pandas' package.` so doing some yak-shaving.",
"Looks like there was even more whitespace I messed up. It's happy now.",
"Thanks @jbrockmendel. Good to finally have this change! ",
"Looks like we missed a usage of freq.nanos in the window code. I don't know that code too well. Does that need updating too?",
"Hmm I think we always treated `\"D\"` as 24 hours before this change, and you defined `Day.nanos` in this PR so that's probably why the tests were still passing (and that we might not having any rolling tests with DST?).\r\n\r\nI guess technically with this change `rolling(\"D\")` shouldn't work since `Day` isn't a fixed frequency anymore, but maybe we should keep allowing this case?\r\n\r\n",
"> and that we might not having any rolling tests with DST?\r\n\r\nlooks like we have one rolling test (test_rolling_datetime) with Day and tzaware self._on but i don't think it passes over a DST transition"
] |
3,271,225,087
| 61,984
|
MNT: simplify `cibuildwheel` configuration
|
closed
| 2025-07-28T20:18:54
| 2025-07-29T05:45:51
| 2025-07-28T22:06:12
|
https://github.com/pandas-dev/pandas/pull/61984
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61984
|
https://github.com/pandas-dev/pandas/pull/61984
|
neutrinoceros
| 1
|
follow up to https://github.com/pandas-dev/pandas/pull/61981#discussion_r2237723118
This reduces the maintenance burden for `cibuildwheel` config parameters:
- cibw takes `project.requires-python` into account for target selection, so there is no need to explicitly exclude unsupported versions
- using `test-extras` instead of `test-requires` avoids a repetition and keeps `project.optional-dependencies` as the single source of truth in this area
- [N/A] closes #xxxx (Replace xxxx with the GitHub issue number)
- [N/A] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [N/A] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [N/A] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
|
[
"Build"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks @neutrinoceros "
] |
3,270,881,574
| 61,983
|
ENH: Add Polars engine to read_csv (#61813)
|
closed
| 2025-07-28T18:04:56
| 2025-07-28T18:30:06
| 2025-07-28T18:30:06
|
https://github.com/pandas-dev/pandas/pull/61983
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61983
|
https://github.com/pandas-dev/pandas/pull/61983
|
abujabarmubarak
| 0
|
### What does this PR do?
This PR adds support for `engine="polars"` in the `pandas.read_csv()` function. It enables users to leverage the performance of the [Polars](https://www.pola.rs/) DataFrame engine when reading CSV files in pandas.
---
### Why is this needed?
This enhancement addresses issue #61813. Since Polars is a high-performance DataFrame library with fast CSV parsing, adding it as an engine allows pandas users to benefit from its speed while staying within the pandas API.
---
### What changes were made?
#### ✅ Added Polars Support in `_read()` Function
- Included a conditional block inside the `_read()` function in `pandas/io/parsers/readers.py` to handle `engine="polars"`
- This helps pandas use `polars.read_csv()` under the hood and convert the result to a pandas DataFrame using `.to_pandas()`
#### ✅ Updated Engine Validation
- Modified `_refine_defaults_read()` to accept `"polars"` as a valid engine
- This ensures pandas doesn’t raise a ValueError when `engine="polars"` is passed
#### ✅ Created a New Test File
- Created `test_read_csv_polars.py` inside `pandas/tests/io/parser/`
- The test verifies that using `engine="polars"` in `read_csv()` loads a simple CSV correctly
- Ensures code coverage and prevents future regressions
---
### How to use it?
```python
import pandas as pd
# Requires Polars to be installed
# pip install polars
df = pd.read_csv("example.csv", engine="polars")
print(df.head())
```
This allows pandas users to benefit from Polars' speed and memory efficiency while still using the familiar pandas API.
---
### Dependencies
Requires the user to have Polars installed:
```bash
pip install polars
```
If `polars` is not installed, the engine will raise an `ImportError` with instructions.
---
### Related Issues
Closes #61813
---
Let me know if any additional tests or validations are needed.
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,270,151,696
| 61,982
|
BUG: Fix boolean column indexing for DataFrame (#61980)
|
closed
| 2025-07-28T14:29:12
| 2025-07-28T19:24:11
| 2025-07-28T19:24:11
|
https://github.com/pandas-dev/pandas/pull/61982
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61982
|
https://github.com/pandas-dev/pandas/pull/61982
|
Aniketsy
| 0
|
(#61980)
This PR fixes:
Boolean column names in DataFrame indexing are now correctly treated as column labels, not boolean masks, unless the key is a valid mask for row selection (a sketch of the rule follows below).
Please let me know if my approach or fix needs any improvements. I'm open to feedback and happy to make changes based on suggestions.
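A sketch of the intended dispatch rule (a hypothetical helper, not the actual pandas internals):
```python
import pandas as pd


def key_kind(df: pd.DataFrame, key: list) -> str:
    # Treat an all-bool list as a row mask only when its length matches
    # the number of rows; otherwise interpret it as column labels.
    if key and all(isinstance(k, bool) for k in key) and len(key) == len(df):
        return "mask"
    return "labels"
```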
|
[] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[] |
3,269,821,531
| 61,981
|
Bump pypa/cibuildwheel from 2.23.3 to 3.1.1
|
closed
| 2025-07-28T12:59:24
| 2025-07-28T23:44:29
| 2025-07-28T19:58:11
|
https://github.com/pandas-dev/pandas/pull/61981
| true
|
https://api.github.com/repos/pandas-dev/pandas/pulls/61981
|
https://github.com/pandas-dev/pandas/pull/61981
|
dependabot[bot]
| 3
|
Bumps [pypa/cibuildwheel](https://github.com/pypa/cibuildwheel) from 2.23.3 to 3.1.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/releases">pypa/cibuildwheel's releases</a>.</em></p>
<blockquote>
<h2>v3.1.1</h2>
<ul>
<li>🐛 Fix a bug showing an incorrect wheel count at the end of execution, and misrepresenting test-only runs in the GitHub Action summary (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2512">#2512</a>)</li>
<li>📚 Docs fix (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2510">#2510</a>)</li>
</ul>
<h2>v3.1.0</h2>
<ul>
<li>🌟 CPython 3.14 wheels are now built by default - without the <code>"cpython-prerelease"</code> <code>enable</code> set. It's time to build and upload these wheels to PyPI! This release includes CPython 3.14.0rc1, which is guaranteed to be ABI compatible with the final release. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2507">#2507</a>) Free-threading is no longer experimental in 3.14, so you have to skip it explicitly with <code>'cp31?t-*'</code> if you don't support it yet. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2503">#2503</a>)</li>
<li>🌟 Adds the ability to <a href="https://cibuildwheel.pypa.io/en/stable/platforms/#android">build wheels for Android</a>! Set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#platform"><code>platform</code> option</a> to <code>android</code> on Linux or macOS to try it out! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2349">#2349</a>)</li>
<li>🌟 Adds Pyodide 0.28, which builds 3.13 wheels (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2487">#2487</a>)</li>
<li>✨ Support for 32-bit <code>manylinux_2_28</code> (now a consistent default) and <code>manylinux_2_34</code> added (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2500">#2500</a>)</li>
<li>🛠 Improved summary, will also use markdown summary output on GHA (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2469">#2469</a>)</li>
<li>🛠 The riscv64 images now have a working default (as they are now part of pypy/manylinux), but are still experimental (and behind an <code>enable</code>) since you can't push them to PyPI yet (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2506">#2506</a>)</li>
<li>🛠 Fixed a typo in the 3.9 MUSL riscv64 identifier (<code>cp39-musllinux_ricv64</code> -> <code>cp39-musllinux_riscv64</code>) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2490">#2490</a>)</li>
<li>🛠 Mistyping <code>--only</code> now shows the correct possibilities, and even suggests near matches on Python 3.14+ (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2499">#2499</a>)</li>
<li>🛠 Only support one output from the repair step on linux like other platforms; auditwheel fixed this over four years ago! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2478">#2478</a>)</li>
<li>🛠 We now use pattern matching extensively (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2434">#2434</a>)</li>
<li>📚 We now have platform maintainers for our special platforms and interpreters! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2481">#2481</a>)</li>
</ul>
<h2>v3.0.1</h2>
<ul>
<li>🛠 Updates CPython 3.14 prerelease to 3.14.0b3 (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2471">#2471</a>)</li>
<li>✨ Adds a CPython 3.14 prerelease iOS build (only when prerelease builds are <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable">enabled</a>) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2475">#2475</a>)</li>
</ul>
<h2>v3.0.0</h2>
<p>See <a href="https://github.com/henryiii"><code>@henryiii</code></a>'s <a href="https://iscinumpy.dev/post/cibuildwheel-3-0-0/">release post</a> for more info on new features!</p>
<ul>
<li>
<p>🌟 Adds the ability to <a href="https://cibuildwheel.pypa.io/en/stable/platforms/#ios">build wheels for iOS</a>! Set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#platform"><code>platform</code> option</a> to <code>ios</code> on a Mac with the iOS toolchain to try it out! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2286">#2286</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2363">#2363</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2432">#2432</a>)</p>
</li>
<li>
<p>🌟 Adds support for the GraalPy interpreter! Enable for your project using the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1538">#1538</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2411">#2411</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2414">#2414</a>)</p>
</li>
<li>
<p>✨ Adds CPython 3.14 support, under the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> <code>cpython-prerelease</code>. This version of cibuildwheel uses 3.14.0b2. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p>
<p><em>While CPython is in beta, the ABI can change, so your wheels might not be compatible with the final release. For this reason, we don't recommend distributing wheels until RC1, at which point 3.14 will be available in cibuildwheel without the flag.</em> (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p>
</li>
<li>
<p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-sources">test-sources option</a>, and changes the working directory for tests. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2062">#2062</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2284">#2284</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2437">#2437</a>)</p>
<ul>
<li>If this option is set, cibuildwheel will copy the files and folders specified in <code>test-sources</code> into the temporary directory we run from. This is required for iOS builds, but also useful for other platforms, as it allows you to avoid placeholders.</li>
<li>If this option is not set, behaviour matches v2.x - cibuildwheel will run the tests from a temporary directory, and you can use the <code>{project}</code> placeholder in the <code>test-command</code> to refer to the project directory. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2420">#2420</a>)</li>
</ul>
</li>
<li>
<p>✨ Adds <a href="https://cibuildwheel.pypa.io/en/stable/options/#dependency-versions"><code>dependency-versions</code></a> inline syntax (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2122">#2122</a>)</p>
</li>
<li>
<p>✨ Improves support for Pyodide builds and adds the experimental <a href="https://cibuildwheel.pypa.io/en/stable/options/#pyodide-version"><code>pyodide-version</code></a> option, which allows you to specify the version of Pyodide to use for builds. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2002">#2002</a>)</p>
</li>
<li>
<p>✨ Add <code>pyodide-prerelease</code> <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable">enable</a> option, with an early build of 0.28 (Python 3.13). (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2431">#2431</a>)</p>
</li>
<li>
<p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-environment"><code>test-environment</code></a> option, which allows you to set environment variables for the test command. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2388">#2388</a>)</p>
</li>
<li>
<p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#xbuild-tools"><code>xbuild-tools</code></a> option, which allows you to specify tools safe for cross-compilation. Currently only used on iOS; will be useful for Android in the future. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2317">#2317</a>)</p>
</li>
<li>
<p>🛠 The default <a href="https://cibuildwheel.pypa.io/en/stable/options/#linux-image">manylinux image</a> has changed from <code>manylinux2014</code> to <code>manylinux_2_28</code>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2330">#2330</a>)</p>
</li>
<li>
<p>🛠 EOL images <code>manylinux1</code>, <code>manylinux2010</code>, <code>manylinux_2_24</code> and <code>musllinux_1_1</code> can no longer be specified by their shortname. The full OCI name can still be used for these images, if you wish. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2316">#2316</a>)</p>
</li>
<li>
<p>🛠 Invokes <code>build</code> rather than <code>pip wheel</code> to build wheels by default. You can control this via the <a href="https://cibuildwheel.pypa.io/en/stable/options/#build-frontend"><code>build-frontend</code></a> option. You might notice that you can see your build log output now! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2321">#2321</a>)</p>
</li>
<li>
<p>🛠 Build verbosity settings have been reworked to have consistent meanings between build backends when non-zero. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2339">#2339</a>)</p>
</li>
<li>
<p>🛠 Removed the <code>CIBW_PRERELEASE_PYTHONS</code> and <code>CIBW_FREE_THREADED_SUPPORT</code> options - these have been folded into the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code></a> option instead. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2095">#2095</a>)</p>
</li>
<li>
<p>🛠 Build environments no longer have setuptools and wheel preinstalled. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2329">#2329</a>)</p>
</li>
<li>
<p>🛠 Use the standard Schema line for the integrated JSONSchema. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2433">#2433</a>)</p>
</li>
<li>
<p>⚠️ Dropped support for building Python 3.6 and 3.7 wheels. If you need to build wheels for these versions, use cibuildwheel v2.23.3 or earlier. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2282">#2282</a>)</p>
</li>
<li>
<p>⚠️ The minimum Python version required to run cibuildwheel is now Python 3.11. You can still build wheels for Python 3.8 and newer. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1912">#1912</a>)</p>
</li>
<li>
<p>⚠️ 32-bit Linux wheels no longer built by default - the <a href="https://cibuildwheel.pypa.io/en/stable/options/#archs">arch</a> was removed from <code>"auto"</code>. It now requires explicit <code>"auto32"</code>. Note that modern manylinux images (like the new default, <code>manylinux_2_28</code>) do not have 32-bit versions. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2458">#2458</a>)</p>
</li>
<li>
<p>⚠️ PyPy wheels no longer built by default, due to a change to our options system. To continue building PyPy wheels, you'll now need to set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> to <code>pypy</code> or <code>pypy-eol</code>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2095">#2095</a>)</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pypa/cibuildwheel/blob/main/docs/changelog.md">pypa/cibuildwheel's changelog</a>.</em></p>
<blockquote>
<h3>v3.1.1</h3>
<p><em>24 July 2025</em></p>
<ul>
<li>🐛 Fix a bug showing an incorrect wheel count at the end of execution, and misrepresenting test-only runs in the GitHub Action summary (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2512">#2512</a>)</li>
<li>📚 Docs fix (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2510">#2510</a>)</li>
</ul>
<h3>v3.1.0</h3>
<p><em>23 July 2025</em></p>
<ul>
<li>🌟 CPython 3.14 wheels are now built by default - without the <code>"cpython-prerelease"</code> <code>enable</code> set. It's time to build and upload these wheels to PyPI! This release includes CPython 3.14.0rc1, which is guaranteed to be ABI compatible with the final release. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2507">#2507</a>) Free-threading is no longer experimental in 3.14, so you have to skip it explicitly with <code>'cp31?t-*'</code> if you don't support it yet. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2503">#2503</a>)</li>
<li>🌟 Adds the ability to <a href="https://cibuildwheel.pypa.io/en/stable/platforms/#android">build wheels for Android</a>! Set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#platform"><code>platform</code> option</a> to <code>android</code> on Linux or macOS to try it out! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2349">#2349</a>)</li>
<li>🌟 Adds Pyodide 0.28, which builds 3.13 wheels (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2487">#2487</a>)</li>
<li>✨ Support for 32-bit <code>manylinux_2_28</code> (now a consistent default) and <code>manylinux_2_34</code> added (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2500">#2500</a>)</li>
<li>🛠 Improved summary, will also use markdown summary output on GHA (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2469">#2469</a>)</li>
<li>🛠 The riscv64 images now have a working default (as they are now part of pypy/manylinux), but are still experimental (and behind an <code>enable</code>) since you can't push them to PyPI yet (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2506">#2506</a>)</li>
<li>🛠 Fixed a typo in the 3.9 MUSL riscv64 identifier (<code>cp39-musllinux_ricv64</code> -> <code>cp39-musllinux_riscv64</code>) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2490">#2490</a>)</li>
<li>🛠 Mistyping <code>--only</code> now shows the correct possibilities, and even suggests near matches on Python 3.14+ (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2499">#2499</a>)</li>
<li>🛠 Only support one output from the repair step on linux like other platforms; auditwheel fixed this over four years ago! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2478">#2478</a>)</li>
<li>🛠 We now use pattern matching extensively (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2434">#2434</a>)</li>
<li>📚 We now have platform maintainers for our special platforms and interpreters! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2481">#2481</a>)</li>
</ul>
<h3>v3.0.1</h3>
<p><em>5 July 2025</em></p>
<ul>
<li>🛠 Updates CPython 3.14 prerelease to 3.14.0b3 (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2471">#2471</a>)</li>
<li>✨ Adds a CPython 3.14 prerelease iOS build (only when prerelease builds are <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable">enabled</a>) (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2475">#2475</a>)</li>
</ul>
<h3>v3.0.0</h3>
<p><em>11 June 2025</em></p>
<p>See <a href="https://github.com/henryiii"><code>@henryiii</code></a>'s <a href="https://iscinumpy.dev/post/cibuildwheel-3-0-0/">release post</a> for more info on new features!</p>
<ul>
<li>
<p>🌟 Adds the ability to <a href="https://cibuildwheel.pypa.io/en/stable/platforms/#ios">build wheels for iOS</a>! Set the <a href="https://cibuildwheel.pypa.io/en/stable/options/#platform"><code>platform</code> option</a> to <code>ios</code> on a Mac with the iOS toolchain to try it out! (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2286">#2286</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2363">#2363</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2432">#2432</a>)</p>
</li>
<li>
<p>🌟 Adds support for the GraalPy interpreter! Enable for your project using the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a>. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/1538">#1538</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2411">#2411</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2414">#2414</a>)</p>
</li>
<li>
<p>✨ Adds CPython 3.14 support, under the <a href="https://cibuildwheel.pypa.io/en/stable/options/#enable"><code>enable</code> option</a> <code>cpython-prerelease</code>. This version of cibuildwheel uses 3.14.0b2. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p>
<p><em>While CPython is in beta, the ABI can change, so your wheels might not be compatible with the final release. For this reason, we don't recommend distributing wheels until RC1, at which point 3.14 will be available in cibuildwheel without the flag.</em> (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2390">#2390</a>)</p>
</li>
<li>
<p>✨ Adds the <a href="https://cibuildwheel.pypa.io/en/stable/options/#test-sources">test-sources option</a>, which copies files and folders into the temporary working directory we run tests from. (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2062">#2062</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2284">#2284</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2420">#2420</a>, <a href="https://redirect.github.com/pypa/cibuildwheel/issues/2437">#2437</a>)</p>
<p>This is particularly important for iOS builds, which do not support placeholders in the <code>test-command</code>, but can also be useful for other platforms.</p>
</li>
<li>
<p>✨ Adds <a href="https://cibuildwheel.pypa.io/en/stable/options/#dependency-versions"><code>dependency-versions</code></a> inline syntax (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2122">#2122</a>)</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pypa/cibuildwheel/commit/e6de07ed3921b51089aae6981989889cf1eddd0c"><code>e6de07e</code></a> Bump version: v3.1.1</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/2ca692b1e55a1f924bfb460099c9d7e015671a8d"><code>2ca692b</code></a> docs: iOS typo fix in docs (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2510">#2510</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/1ac7fa7f004958fbde774ee89523c446a5d99934"><code>1ac7fa7</code></a> fix: report defects in logs and HTML summaries (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2512">#2512</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/ffd835cef18fa11522f608fc0fa973b89f5ddc87"><code>ffd835c</code></a> Bump version: v3.1.0</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/3e2a9aa6e85824999f897fc2c060ca12a5113ef6"><code>3e2a9aa</code></a> fix: regenerate schema</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/10c727eed9fc962f75d33d472272e3ad78c3e707"><code>10c727e</code></a> feat: Python 3.14rc1 build by default (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2507">#2507</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/f628c9dd23fe6e263cb91cef755a51a0af3bcddc"><code>f628c9d</code></a> [pre-commit.ci] pre-commit autoupdate (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2505">#2505</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/0f487ee2cb00876d95290da49d04208c91237857"><code>0f487ee</code></a> feat: add support for building Android wheels (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2349">#2349</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/e2e24882d8422e974295b1b9079d4ce80a5098a4"><code>e2e2488</code></a> feat: add default riscv64 images (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2506">#2506</a>)</li>
<li><a href="https://github.com/pypa/cibuildwheel/commit/a8bff94dbb5f3a4a914e29cf8353c2f6f1b9ab8b"><code>a8bff94</code></a> [Bot] Update dependencies (<a href="https://redirect.github.com/pypa/cibuildwheel/issues/2504">#2504</a>)</li>
<li>Additional commits viewable in <a href="https://github.com/pypa/cibuildwheel/compare/v2.23.3...v3.1.1">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
</details>
|
[
"Build",
"CI",
"Dependencies"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"@mroeschke Could this change be backported to the `2.3.x` branch? cibuildwheel `3.1.1` will be a requirement for cp314 wheels.",
"Just noting that 2.3.x will likely not add additional Python version support as 2.3.x releases are only meant to address regressions and fixes for major pandas 3.0 features.\r\n\r\npandas 3.0 _may_ be the first pandas version to support 3.14",
"> Just noting that 2.3.x will likely not add additional Python version support as 2.3.x releases are only meant to address regressions and fixes for major pandas 3.0 features.\r\n> \r\n> pandas 3.0 _may_ be the first pandas version to support 3.14\r\n\r\nOh, I've started testing 3.14 for Home Assistant already and the test suite passes, including some (albeit very) limited tests with pandas `2.3.1`. Not sure how much effort it will actually take to make it fully compatible, I've seen there is some work in #61950.\r\n\r\nJust from a downstream package perspective, in general I prefer it if packages don't couple new Python version support with a new major revision / breaking changes. It just makes upgrading more difficult. _I'm aware that's often just the nature of things line up. Just wanted to share my experience._"
] |
3,269,122,506
| 61,980
|
BUG: Boolean Column Indexing Issue in Pandas
|
open
| 2025-07-28T09:55:14
| 2025-08-18T01:16:55
| null |
https://github.com/pandas-dev/pandas/issues/61980
| true
| null | null |
tanjt107
| 4
|
### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
data = {"A": [1, 2, 3], "B": [4, 5, 6], True: [7, 8, 9]}
df = pd.DataFrame(data)
cols = ["A"]
df[cols]
# A
# 0 1
# 1 2
# 2 3
cols = [True]
df[cols] # ValueError: Item wrong length 1 instead of 3.
```
### Issue Description
The issue arises when attempting to access a `pandas.DataFrame` using a list of boolean values as column names.
### Expected Behavior
```py
True
0 7
1 8
2 9
```
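One possible workaround that sidesteps the mask/label ambiguity is label-based selection via `filter` (a sketch; not verified against every version):
```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], True: [7, 8, 9]})

# filter selects by label, so the boolean-mask interpretation never applies:
print(df.filter(items=[True]))
```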
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.4
python-bits : 64
OS : Darwin
OS-release : 24.5.0
Version : Darwin Kernel Version 24.5.0: Tue Apr 22 19:54:49 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.2.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.0.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None
</details>
|
[
"Bug",
"Indexing",
"Needs Discussion"
] | 0
| 0
| 0
| 0
| 0
| 0
| 0
| 0
|
[
"Thanks for the report. Unfortunately because there are multiple acceptable inputs similar to `[True]` (Boolean mask or list of columns), pandas has to make some choice as to what takes priority. It isn't clear to me that we should do one or the other, and because of that I'd be hesitant to change behavior here.\n\nThis is a great example of why I think pandas should only accept strings as columns (but of course, that would involve a significant amount of API changes and backwards incompatibility issues).",
"-1 for the reasons above as well. `[True]` is ambiguous between a mask and a label, and changing the behavior risks breaking existing usage.\n\nThanks for raising this edge case though!",
"I'd expect this to work.\n\nNote `df[True]` works, `df.loc[:, True]` does not, nor does `df.loc[:, [True]]`\n\nThe real ambiguity is if the columns are all-bool and the user passes a matching-length mask that can be interpreted either way. I think the long-term API is a separate method for mask-based indexing.",
"> I think the long-term API is a separate method for mask-based indexing.\n\nI'd be +1 on restricting mask-based to a `filter` method."
] |