| status | repo_name | repo_url | issue_id | updated_files | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/airflow | https://github.com/apache/airflow | 25,232 | ["airflow/providers/databricks/operators/databricks.py", "airflow/providers/databricks/utils/databricks.py", "tests/providers/databricks/operators/test_databricks.py", "tests/providers/databricks/utils/databricks.py"] | enable_elastic_disk property incorrectly mapped when making a request to Databricks | ### Apache Airflow version
2.2.2
### What happened
When using `apache-airflow-providers-databricks` version 2.2.0, I am sending a request to Databricks to submit a job.
https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsCreate -> `api/2.0/jobs/runs/submit`
Databricks expects a boolean for the property `enable_elastic_disk`, while the Airflow Databricks provider sends a string.
```
new_cluster = {
    "autoscale": {"min_workers": 2, "max_workers": 5},
    "spark_version": "10.4.x-scala2.12",
    "aws_attributes": {
        "first_on_demand": 1,
        "availability": "SPOT_WITH_FALLBACK",
        "zone_id": "auto",
        "spot_bid_price_percent": 100,
    },
    "enable_elastic_disk": True,
    "driver_node_type_id": "r5a.large",
    "node_type_id": "c5a.xlarge",
    "cluster_source": "JOB",
}
```
And the property `enable_elastic_disk` is not set on the Databricks side. I also made the same request to Databricks from Postman, and the property was set to `true`, which means that the problem does not lie on the Databricks side.
```
{
    "name": "test",
    "tasks": [
        {
            "task_key": "test-task-key",
            "notebook_task": {
                "notebook_path": "path_to_notebook"
            },
            "new_cluster": {
                "autoscale": {"min_workers": 1, "max_workers": 2},
                "cluster_name": "",
                "spark_version": "10.4.x-scala2.12",
                "aws_attributes": {
                    "first_on_demand": 1,
                    "availability": "SPOT_WITH_FALLBACK",
                    "zone_id": "auto",
                    "spot_bid_price_percent": 100
                },
                "driver_node_type_id": "r5a.large",
                "node_type_id": "c5a.xlarge",
                "enable_elastic_disk": true,
                "cluster_source": "JOB"
            }
        }
    ]
}
```
I have tried to find the problem, and it apparently is this line. Before executing the line, `enable_elastic_disk` is `True` (of type boolean), but afterwards it becomes the string `'True'`, which Databricks does not parse.
https://github.com/apache/airflow/blob/1cb16d5588306fcb7177486dc60c1974ea3034d4/airflow/providers/databricks/operators/databricks.py#L381
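To illustrate the failure mode, here is a minimal sketch of a type-preserving coercion helper; the helper name and structure are assumptions for illustration, not the provider's actual code:
```python
# the symptom: blanket string coercion turns booleans into strings
str(True)  # -> 'True', which the Databricks API does not treat as a boolean

# hypothetical type-preserving alternative: recurse into containers,
# but never stringify scalar values such as booleans
def coerce_json(content):
    if isinstance(content, (bool, int, float, str)):
        return content  # leave scalars (including booleans) untouched
    if isinstance(content, list):
        return [coerce_json(e) for e in content]
    if isinstance(content, dict):
        return {k: coerce_json(v) for k, v in content.items()}
    raise TypeError(f"Type {type(content)} is not supported for job parameters")
```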
### What you think should happen instead
After setting the property `enable_elastic_disk`, it should be propagated to Databricks, but it's not.
### How to reproduce
Try to run:
```
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

new_cluster = {
    "autoscale": {"min_workers": 2, "max_workers": 5},
    "spark_version": "10.4.x-scala2.12",
    "aws_attributes": {
        "first_on_demand": 1,
        "availability": "SPOT_WITH_FALLBACK",
        "zone_id": "auto",
        "spot_bid_price_percent": 100,
    },
    "enable_elastic_disk": True,
    "driver_node_type_id": "r5a.large",
    "node_type_id": "c5a.xlarge",
    "cluster_source": "JOB",
}
notebook_task = {
    "notebook_path": "/Repos/path_to_notebook/main_asset_information",
    "base_parameters": {"env": env},
}
asset_information = DatabricksSubmitRunOperator(
    task_id="task_id",
    databricks_conn_id="databricks",
    new_cluster=new_cluster,
    notebook_task=notebook_task,
)
```
Make sure an Airflow connection named `databricks` is set and check whether Databricks has the property set.
After executing, we can check whether the property is set on Databricks by using the endpoint:
`https://DATABRICKS_HOST/api/2.1/jobs/runs/get?run_id=123`
### Operating System
MWAA
### Versions of Apache Airflow Providers
`apache-airflow-providers-databricks` in version 2.2.0
### Deployment
MWAA
### Deployment details
_No response_
### Anything else
That's a permanent and repeatable problem. It would be great if this fix could be backported to lower versions, for example `2.2.1`, because I am not sure when AWS will decide to upgrade to the latest Airflow code, and I am also not sure whether installing higher versions of the Databricks provider on Airflow `2.2.2` might cause issues.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25232 | https://github.com/apache/airflow/pull/25394 | 5de11e1410b432d632e8c0d1d8ca0945811a56f0 | 0255a0a5e7b93f2daa3a51792cd38d19d6a373c0 | 2022-07-22T12:48:52Z | python | 2022-08-04T15:47:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,210 | ["airflow/datasets/manager.py", "airflow/models/dataset.py", "tests/datasets/test_manager.py", "tests/models/test_taskinstance.py"] | Many tasks updating dataset at once causes some of them to fail | ### Apache Airflow version
main (development)
### What happened
I have 16 dags which all update the same dataset. They're set to finish at the same time (when the seconds on the clock are 00). About three quarters of them behave as expected, but the other quarter fails with errors like:
```
[2022-07-21, 06:06:00 UTC] {standard_task_runner.py:97} ERROR - Failed to execute job 8 for task increment_source ((psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "dataset_dag_run_queue_pkey"
DETAIL: Key (dataset_id, target_dag_id)=(1, simple_dataset_sink) already exists.
[SQL: INSERT INTO dataset_dag_run_queue (dataset_id, target_dag_id, created_at) VALUES (%(dataset_id)s, %(target_dag_id)s, %(created_at)s)]
[parameters: {'dataset_id': 1, 'target_dag_id': 'simple_dataset_sink', 'created_at': datetime.datetime(2022, 7, 21, 6, 6, 0, 131730, tzinfo=Timezone('UTC'))}]
(Background on this error at: https://sqlalche.me/e/14/gkpj); 375)
```
I've prepared a gist with the details: https://gist.github.com/MatrixManAtYrService/b5e58be0949eab9180608d0760288d4d
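One way to make the queue insert race-safe is an idempotent insert; this is a sketch assuming a PostgreSQL backend and the existing `DatasetDagRunQueue` model, not necessarily the actual fix:
```python
from sqlalchemy.dialects.postgresql import insert

# silently skip the row if another task already queued this (dataset, dag) pair
stmt = insert(DatasetDagRunQueue).values(
    dataset_id=dataset.id,
    target_dag_id=dag_id,
).on_conflict_do_nothing(index_elements=["dataset_id", "target_dag_id"])
session.execute(stmt)
```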
### What you think should happen instead
All dags should succeed
### How to reproduce
See this gist: https://gist.github.com/MatrixManAtYrService/b5e58be0949eab9180608d0760288d4d
Summary: Unpause all of the dags which we expect to collide, wait two minutes. Some will have collided.
### Operating System
docker/debian
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
`astro dev start` targeting commit: cff7d9194f549d801947f47dfce4b5d6870bfaaa
be sure to have `pause` in requirements.txt
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25210 | https://github.com/apache/airflow/pull/26103 | a2db8fcb7df1a266e82e17b937c9c1cf01a16a42 | 4dd628c26697d759aebb81a7ac2fe85a79194328 | 2022-07-21T06:28:32Z | python | 2022-09-01T20:28:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,200 | ["airflow/models/baseoperator.py", "airflow/models/dagrun.py", "airflow/models/taskinstance.py", "airflow/ti_deps/dep_context.py", "airflow/ti_deps/deps/trigger_rule_dep.py", "tests/models/test_dagrun.py", "tests/models/test_taskinstance.py", "tests/ti_deps/deps/test_trigger_rule_dep.py"] | DAG Run fails when chaining multiple empty mapped tasks | ### Apache Airflow version
2.3.3 (latest released)
### What happened
On Kubernetes Executor and Local Executor (others not tested), a significant fraction of the DAG Runs of a DAG that has two consecutive mapped tasks which are being passed an empty list are marked as failed when all tasks either succeed or are skipped.

### What you think should happen instead
The DAG Run should be marked success.
### How to reproduce
Run the following DAG on Kubernetes Executor or Local Executor.
The real world version of this DAG has several mapped tasks that all point to the same list, and that list is frequently empty. I have made a minimal reproducible example.
```py
from datetime import datetime

from airflow import DAG
from airflow.decorators import task

with DAG(dag_id="break_mapping", start_date=datetime(2022, 3, 4)) as dag:

    @task
    def add_one(x: int):
        return x + 1

    @task
    def say_hi():
        print("Hi")

    added_values = add_one.expand(x=[])
    added_more_values = add_one.expand(x=[])
    say_hi() >> added_values
    added_values >> added_more_values
```
### Operating System
Debian Bullseye
### Versions of Apache Airflow Providers
```
apache-airflow-providers-amazon==1!4.0.0
apache-airflow-providers-cncf-kubernetes==1!4.1.0
apache-airflow-providers-elasticsearch==1!4.0.0
apache-airflow-providers-ftp==1!3.0.0
apache-airflow-providers-google==1!8.1.0
apache-airflow-providers-http==1!3.0.0
apache-airflow-providers-imap==1!3.0.0
apache-airflow-providers-microsoft-azure==1!4.0.0
apache-airflow-providers-mysql==1!3.0.0
apache-airflow-providers-postgres==1!5.0.0
apache-airflow-providers-redis==1!3.0.0
apache-airflow-providers-slack==1!5.0.0
apache-airflow-providers-sqlite==1!3.0.0
apache-airflow-providers-ssh==1!3.0.0
```
### Deployment
Astronomer
### Deployment details
Local was tested on docker compose (from astro-cli)
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25200 | https://github.com/apache/airflow/pull/25995 | 1e19807c7ea0d7da11b224658cd9a6e3e7a14bc5 | 5697e9fdfa9d5af2d48f7037c31972c2db1f4397 | 2022-07-20T20:33:42Z | python | 2022-09-01T12:03:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,179 | ["airflow/providers/apache/livy/hooks/livy.py", "airflow/providers/apache/livy/operators/livy.py", "airflow/providers/apache/livy/sensors/livy.py", "tests/providers/apache/livy/hooks/test_livy.py"] | Add auth_type to LivyHook | ### Apache Airflow Provider(s)
apache-livy
### Versions of Apache Airflow Providers
apache-airflow-providers-apache-livy==3.0.0
### Apache Airflow version
2.3.3 (latest released)
### Operating System
Ubuntu 18.04
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### What happened
This is a feature request as opposed to an issue.
I want to use the `LivyHook` to communicate with a Kerberized cluster.
As such, I am using `requests_kerberos.HTTPKerberosAuth` as the authentication type.
Currently, I am implementing this as follows:
```python
from airflow.providers.apache.livy.hooks.livy import LivyHook as NativeHook
from requests_kerberos import HTTPKerberosAuth as NativeAuth


class HTTPKerberosAuth(NativeAuth):
    def __init__(self, *ignore_args, **kwargs):
        super().__init__(**kwargs)


class LivyHook(NativeHook):
    def __init__(self, auth_type=HTTPKerberosAuth, **kwargs):
        super().__init__(**kwargs)
        self.auth_type = auth_type
```
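With the requested feature, the subclassing above would become unnecessary; usage could look like this (a sketch of the proposed API, assuming the hook grows an `auth_type` argument):
```python
from airflow.providers.apache.livy.hooks.livy import LivyHook
from requests_kerberos import HTTPKerberosAuth

# proposed: pass the auth class directly instead of subclassing the hook
hook = LivyHook(livy_conn_id="livy_kerberized", auth_type=HTTPKerberosAuth)
```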
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25179 | https://github.com/apache/airflow/pull/25183 | ae7bf474109410fa838ab2728ae6d581cdd41808 | 7d3e799f7e012d2d5c1fe24ce2bea01e68a5a193 | 2022-07-20T10:09:03Z | python | 2022-08-07T13:49:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,165 | ["airflow/decorators/base.py", "tests/decorators/test_mapped.py", "tests/utils/test_task_group.py"] | Dynamic Tasks inside of TaskGroup do not have group_id prepended to task_id | ### Apache Airflow version
2.3.3 (latest released)
### What happened
As the title states, if you have dynamically mapped tasks inside of a `TaskGroup`, those tasks do not get the `group_id` prepended to their respective `task_id`s. This causes at least a couple of undesirable side effects:
1. Task names are truncated in Grid/Graph* View. The tasks below are named `plus_one` and `plus_two`:


Presumably this is because the UI normally strips off the `group_id` prefix.
\* Graph View was very inconsistent in my experience. Sometimes the names are truncated, and sometimes they render correctly. I haven't figured out the pattern behind this behavior.
2. Duplicate `task_id`s between groups result in an `airflow.exceptions.DuplicateTaskIdFound`, even if the `group_id` would normally disambiguate them.
### What you think should happen instead
These dynamic tasks inside of a group should have the `group_id` prepended for consistent behavior.
### How to reproduce
```
#!/usr/bin/env python3
import datetime

from airflow.decorators import dag, task
from airflow.utils.task_group import TaskGroup


@dag(
    start_date=datetime.datetime(2022, 7, 19),
    schedule_interval=None,
)
def test_dag():
    with TaskGroup(group_id='group'):

        @task
        def plus_one(x: int):
            return x + 1

        plus_one.expand(x=[1, 2, 3])

    with TaskGroup(group_id='ggg'):

        @task
        def plus_two(x: int):
            return x + 2

        plus_two.expand(x=[1, 2, 3])


dag = test_dag()

if __name__ == '__main__':
    dag.cli()
```
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Standalone
### Anything else
Possibly related: #12309
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25165 | https://github.com/apache/airflow/pull/26081 | 6a8f0167436b8b582aeb92a93d3f69d006b36f7b | 9c4ab100e5b069c86bd00bb7860794df0e32fc2e | 2022-07-19T18:58:28Z | python | 2022-09-01T08:46:47Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,163 | ["airflow/providers/common/sql/operators/sql.py", "tests/providers/common/sql/operators/test_sql.py"] | Common-SQL Operators Various Bugs | ### Apache Airflow Provider(s)
common-sql
### Versions of Apache Airflow Providers
`apache-airflow-providers-common-sql==1.0.0`
### Apache Airflow version
2.3.3 (latest released)
### Operating System
macOS Monterey 12.3.1
### Deployment
Astronomer
### Deployment details
_No response_
### What happened
- `SQLTableCheckOperator` builds multiple checks in such a way that if two or more checks are given, and one is not a fully aggregated statement, then the SQL fails as it is missing a `GROUP BY` clause.
- `SQLColumnCheckOperator` provides only the last SQL query built from the columns, so when a check fails, it will only give the correct SQL in the exception statement by coincidence.
### What you think should happen instead
- Multiple checks should not need a `GROUP BY` clause
- Either the correct SQL statement, or no SQL statement, should be returned in the exception message.
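A sketch of how multiple checks could be combined without needing a `GROUP BY` (an assumed approach, not necessarily the final fix): wrap each check in its own subquery so that row-level and aggregate checks both reduce to a single column, then stitch them together with `UNION ALL`. The builder below is hypothetical; `checks` and `table` are as in the operator examples that follow.
```python
# hypothetical SQL builder: one subquery per check, combined via UNION ALL,
# so no single SELECT mixes aggregate and row-level expressions
checks_sql = " UNION ALL ".join(
    f"SELECT '{name}' AS check_name, MIN({name}) AS check_result FROM "
    f"(SELECT CASE WHEN {check['check_statement']} THEN 1 ELSE 0 END AS {name} FROM {table})"
    for name, check in checks.items()
)
```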
### How to reproduce
For the `SQLTableCheckOperator`, using the operator like so:
```
forestfire_costs_table_checks = SQLTableCheckOperator(
    task_id="forestfire_costs_table_checks",
    table=SNOWFLAKE_FORESTFIRE_COST_TABLE,
    checks={
        "row_count_check": {"check_statement": "COUNT(*) = 9"},
        "total_cost_check": {
            "check_statement": "land_damage_cost + property_damage_cost + lost_profits_cost = total_cost"
        },
    },
)
```
For the `SQLColumnCheckOperator`, using the operator like so:
```
cost_column_checks = SQLColumnCheckOperator(
    task_id="cost_column_checks",
    table=SNOWFLAKE_COST_TABLE,
    column_mapping={
        "ID": {"null_check": {"equal_to": 0}},
        "LAND_DAMAGE_COST": {"min": {"geq_to": 0}},
        "PROPERTY_DAMAGE_COST": {"min": {"geq_to": 0}},
        "LOST_PROFITS_COST": {"min": {"geq_to": 0}},
    },
)
```
and ensuring that any of the `ID`, `LAND_DAMAGE_COST`, or `PROPERTY_DAMAGE_COST` checks fail.
An example DAG with the correct environment and data can be found [here](https://github.com/astronomer/airflow-data-quality-demo/blob/main/dags/snowflake_examples/complex_snowflake_transform.py).
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25163 | https://github.com/apache/airflow/pull/25164 | d66e427c4d21bc479caa629299a786ca83747994 | be7cb1e837b875f44fcf7903329755245dd02dc3 | 2022-07-19T18:18:01Z | python | 2022-07-22T14:01:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,149 | ["airflow/models/dagbag.py", "airflow/www/security.py", "tests/models/test_dagbag.py", "tests/www/views/test_views_home.py"] | DAG.access_control can't sync when clean access_control | ### Apache Airflow version
2.3.3 (latest released)
### What happened
I change my DAG from
```python
with DAG(
    'test',
    access_control={'team': {'can_edit', 'can_read'}},
) as dag:
    ...
```
to
```python
with DAG(
    'test',
) as dag:
    ...
```
Removing the `access_control` argument means the scheduler can't sync permissions to the DB.
If we write code like this,
```python
with DAG(
    'test',
    access_control={'team': {}},
) as dag:
    ...
```
It works.
### What you think should happen instead
It should clear permissions on the `test` DAG for role `team`.
I think we should give consistent behaviour for permission sync. If we pass the `access_control` argument, permissions assigned in the web UI are cleared when we update the DAG file.
### How to reproduce
_No response_
### Operating System
CentOS Linux release 7.9.2009 (Core)
### Versions of Apache Airflow Providers
```
airflow-code-editor==5.2.2
apache-airflow==2.3.3
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-ftp==3.0.0
apache-airflow-providers-http==3.0.0
apache-airflow-providers-imap==3.0.0
apache-airflow-providers-microsoft-psrp==2.0.0
apache-airflow-providers-microsoft-winrm==3.0.0
apache-airflow-providers-mysql==3.0.0
apache-airflow-providers-redis==3.0.0
apache-airflow-providers-samba==4.0.0
apache-airflow-providers-sftp==3.0.0
apache-airflow-providers-sqlite==3.0.0
apache-airflow-providers-ssh==3.0.0
```
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25149 | https://github.com/apache/airflow/pull/30340 | 97ad7cee443c7f4ee6c0fbaabcc73de16f99a5e5 | 2c0c8b8bfb5287e10dc40b73f326bbf9a0437bb1 | 2022-07-19T09:37:48Z | python | 2023-04-26T14:11:14Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,138 | ["airflow/providers/amazon/aws/hooks/sqs.py", "airflow/providers/amazon/aws/operators/sqs.py", "tests/providers/amazon/aws/operators/test_sqs.py"] | SQSPublishOperator should allow sending messages to a FIFO Queue | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
`apache-airflow-providers-amazon==4.1.0`
### Apache Airflow version
2.2.2
### Operating System
Amazon Linux 2
### Deployment
MWAA
### Deployment details
_No response_
### What happened
The current state of the [SQSPublishOperator](https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/operators/sqs.py) does not support FIFO queues because:
From the Boto3 documentation, a FIFO queue requires a MessageGroupId parameter -> [SQS.Client.send_message](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sqs.html#SQS.Client.send_message)
The [SQSHook](https://airflow.apache.org/docs/apache-airflow-providers-amazon/2.4.0/_modules/airflow/providers/amazon/aws/hooks/sqs.html#SQSHook) calls the boto3 client with just these parameters:
```
.send_message(
    QueueUrl=queue_url,
    MessageBody=message_body,
    DelaySeconds=delay_seconds,
    MessageAttributes=message_attributes or {},
)
```
so if `queue_url` happens to be a FIFO queue, then that API call will fail and therefore the Airflow Operator will fail too.
### What you think should happen instead
The `SQSPublishOperator` should support FIFO queues by accepting the `MessageGroupId` as a parameter.
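A sketch of the extended hook call: the `MessageGroupId` and `MessageDeduplicationId` parameters exist in boto3's `send_message`; plumbing the `message_group_id`/`message_deduplication_id` variables through the operator is the proposed addition, so those names are hypothetical here.
```python
client.send_message(
    QueueUrl=queue_url,
    MessageBody=message_body,
    MessageAttributes=message_attributes or {},
    # required for FIFO queues (and rejected for standard queues);
    # note FIFO queues also do not allow a per-message DelaySeconds
    MessageGroupId=message_group_id,
    # required unless content-based deduplication is enabled on the queue
    MessageDeduplicationId=message_deduplication_id,
)
```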
### How to reproduce
1. Setup a FIFO SQS queue
2. Setup an Airflow operator:
```
task1 = SQSPublishOperator(
    task_id="task1",
    sqs_queue="url",
    message_content="Message string",
    delay_seconds=0,
)
```
### Anything else
N/A
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25138 | https://github.com/apache/airflow/pull/25171 | 6839813bc75e62e154fe4163ffa1bda1c8e8cc8f | 47b72056c46931aef09d63d6d80fbdd3d9128b09 | 2022-07-18T19:06:18Z | python | 2022-07-21T16:25:03Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,103 | ["airflow/api_connexion/openapi/v1.yaml", "tests/api_connexion/endpoints/test_variable_endpoint.py"] | API `variables/{variable_key}` request fails if key has character `/` | ### Apache Airflow version
2.3.2
### What happened
Created a variable e.g. `a/variable` and couldn't get or delete it
### What you think should happen instead
I shouldn't have been allowed to create a variable with `/`, or the GET and DELETE should work.
### How to reproduce


```
DELETE /variables/{variable_key}
GET /variables/{variable_key}
```
Create a variable with `/`, then try to get it. The GET will 404, even after HTML escaping; the DELETE also fails.
`GET /variables/` works just fine
### Operating System
astro
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25103 | https://github.com/apache/airflow/pull/25774 | 98aac5dc282b139f0e726aac512b04a6693ba83d | a1beede41fb299b215f73f987a572c34f628de36 | 2022-07-15T21:22:11Z | python | 2022-08-18T06:08:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,095 | ["airflow/models/taskinstance.py", "airflow/models/taskreschedule.py", "airflow/serialization/serialized_objects.py", "airflow/ti_deps/deps/ready_to_reschedule.py", "tests/models/test_taskinstance.py", "tests/serialization/test_dag_serialization.py", "tests/ti_deps/deps/test_ready_to_reschedule_dep.py"] | Dynamically mapped sensor with mode='reschedule' fails with violated foreign key constraint | ### Apache Airflow version
2.3.3 (latest released)
### What happened
If you are using [Dynamic Task Mapping](https://airflow.apache.org/docs/apache-airflow/stable/concepts/dynamic-task-mapping.html) to map a Sensor with `.partial(mode='reschedule')`, and if that sensor fails its poke condition even once, the whole sensor task will immediately die with an error like:
```
[2022-07-14, 10:45:05 CDT] {standard_task_runner.py:92} ERROR - Failed to execute job 19 for task check_reschedule ((sqlite3.IntegrityError) FOREIGN KEY constraint failed
[SQL: INSERT INTO task_reschedule (task_id, dag_id, run_id, map_index, try_number, start_date, end_date, duration, reschedule_date) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)]
[parameters: ('check_reschedule', 'test_dag', 'manual__2022-07-14T20:44:02.708517+00:00', -1, 1, '2022-07-14 20:45:05.874988', '2022-07-14 20:45:05.900895', 0.025907, '2022-07-14 20:45:10.898820')]
(Background on this error at: https://sqlalche.me/e/14/gkpj); 2973372)
```
A similar error arises when using a Postgres backend:
```
[2022-07-14, 11:09:22 CDT] {standard_task_runner.py:92} ERROR - Failed to execute job 17 for task check_reschedule ((psycopg2.errors.ForeignKeyViolation) insert or update on table "task_reschedule" violates foreign key constraint "task_reschedule_ti_fkey"
DETAIL: Key (dag_id, task_id, run_id, map_index)=(test_dag, check_reschedule, manual__2022-07-14T21:08:13.462782+00:00, -1) is not present in table "task_instance".
[SQL: INSERT INTO task_reschedule (task_id, dag_id, run_id, map_index, try_number, start_date, end_date, duration, reschedule_date) VALUES (%(task_id)s, %(dag_id)s, %(run_id)s, %(map_index)s, %(try_number)s, %(start_date)s, %(end_date)s, %(duration)s, %(reschedule_date)s) RETURNING task_reschedule.id]
[parameters: {'task_id': 'check_reschedule', 'dag_id': 'test_dag', 'run_id': 'manual__2022-07-14T21:08:13.462782+00:00', 'map_index': -1, 'try_number': 1, 'start_date': datetime.datetime(2022, 7, 14, 21, 9, 22, 417922, tzinfo=Timezone('UTC')), 'end_date': datetime.datetime(2022, 7, 14, 21, 9, 22, 464495, tzinfo=Timezone('UTC')), 'duration': 0.046573, 'reschedule_date': datetime.datetime(2022, 7, 14, 21, 9, 27, 458623, tzinfo=Timezone('UTC'))}]
(Background on this error at: https://sqlalche.me/e/14/gkpj); 2983150)
```
`mode='poke'` seems to behave as expected. As far as I can tell, this affects all Sensor types.
### What you think should happen instead
This combination of features should work without error.
### How to reproduce
```
#!/usr/bin/env python3
import datetime

from airflow.decorators import dag, task
from airflow.sensors.bash import BashSensor


@dag(
    start_date=datetime.datetime(2022, 7, 14),
    schedule_interval=None,
)
def test_dag():
    @task
    def get_tasks():
        return ['(($RANDOM % 2 == 0))'] * 10

    tasks = get_tasks()
    BashSensor.partial(task_id='check_poke', mode='poke', poke_interval=5).expand(bash_command=tasks)
    BashSensor.partial(task_id='check_reschedule', mode='reschedule', poke_interval=5).expand(bash_command=tasks)


dag = test_dag()

if __name__ == '__main__':
    dag.cli()
```
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Standalone
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25095 | https://github.com/apache/airflow/pull/25594 | 84718f92334b7e43607ab617ef31f3ffc4257635 | 5f3733ea310b53a0a90c660dc94dd6e1ad5755b7 | 2022-07-15T13:35:48Z | python | 2022-08-11T07:30:33Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,092 | ["airflow/providers/microsoft/mssql/hooks/mssql.py", "tests/providers/microsoft/mssql/hooks/test_mssql.py"] | MsSqlHook.get_sqlalchemy_engine uses pyodbc instead of pymssql | ### Apache Airflow Provider(s)
microsoft-mssql
### Versions of Apache Airflow Providers
apache-airflow-providers-microsoft-mssql==2.0.1
### Apache Airflow version
2.2.2
### Operating System
Ubuntu 20.04
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
`MsSqlHook.get_sqlalchemy_engine` uses the default mssql driver: `pyodbc` instead of `pymssql`.
- If pyodbc is installed: we get `sqlalchemy.exc.InterfaceError: (pyodbc.InterfaceError)`
- Otherwise we get: `ModuleNotFoundError`
PS: Looking at the code, this should still apply up to provider version 3.0.0 (latest version).
### What you think should happen instead
The default driver used by `sqlalchemy.create_engine` for mssql is `pyodbc`.
To use `pymssql` with `create_engine`, the URI needs to start with `mssql+pymssql://` (currently the hook uses `DbApiHook.get_uri`, which produces a URI starting with `mssql://`).
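A minimal sketch of the fix, assuming we override `get_uri` in the hook to force the `pymssql` dialect (`DbApiHook` is the existing base class; the override itself is the proposed change):
```python
from urllib.parse import urlsplit, urlunsplit


class MsSqlHook(DbApiHook):
    def get_uri(self) -> str:
        # rewrite the scheme so SQLAlchemy selects the pymssql driver
        parts = urlsplit(super().get_uri())
        return urlunsplit(parts._replace(scheme="mssql+pymssql"))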
### How to reproduce
```python
>>> from contextlib import closing
>>> from airflow.providers.microsoft.mssql.hooks.mssql import MsSqlHook
>>>
>>> hook = MsSqlHook()
>>> with closing(hook.get_sqlalchemy_engine().connect()) as c:
...     with closing(c.execute("SELECT SUSER_SNAME()")) as res:
...         r = res.fetchone()
```
Will raise an exception due to the wrong driver being used.
### Anything else
Demo for sqlalchemy default mssql driver choice:
```bash
# pip install sqlalchemy
... Successfully installed sqlalchemy-1.4.39
# pip install pymssql
... Successfully installed pymssql-2.2.5
```
```python
>>> from sqlalchemy import create_engine
>>> create_engine("mssql://test:pwd@test:1433")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 2, in create_engine
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/deprecations.py", line 309, in warned
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/create.py", line 560, in create_engine
dbapi = dialect_cls.dbapi(**dbapi_args)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/connectors/pyodbc.py", line 43, in dbapi
return __import__("pyodbc")
ModuleNotFoundError: No module named 'pyodbc'
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25092 | https://github.com/apache/airflow/pull/25185 | a01cc5b0b8e4ce3b24970d763e4adccfb4e69f09 | df5a54d21d6991d6cae05c38e1562da2196e76aa | 2022-07-15T12:42:02Z | python | 2022-08-05T15:41:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,090 | ["airflow/jobs/scheduler_job.py", "airflow/models/dag.py", "airflow/timetables/base.py", "airflow/timetables/simple.py", "airflow/www/views.py", "newsfragments/25090.significant.rst"] | More natural sorting of DAG runs in the grid view | ### Apache Airflow version
2.3.2
### What happened
DAG with a schedule to run once every hour.
The DAG was started manually at 12:44; let's call this run 1.
At 13:00 the scheduled run started; let's call this run 2. It appears before run 1 in the grid view.
See attached screenshot

### What you think should happen instead
Dags in grid view should appear in the order they are started.
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
apache-airflow==2.3.2
apache-airflow-client==2.1.0
apache-airflow-providers-celery==3.0.0
apache-airflow-providers-cncf-kubernetes==4.0.2
apache-airflow-providers-docker==3.0.0
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-postgres==5.0.0
apache-airflow-providers-sqlite==2.1.3
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25090 | https://github.com/apache/airflow/pull/25633 | a1beede41fb299b215f73f987a572c34f628de36 | 36eea1c8e05a6791d144e74f4497855e35baeaac | 2022-07-15T11:16:35Z | python | 2022-08-18T06:28:06Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,036 | ["airflow/example_dags/example_datasets.py", "airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | Test that dataset not updated when task skipped | the AIP specifies that when a task is skipped, that we don’t mark the dataset as “updated”. we should simply add a test that verifies that this is what happens (and make changes if necessary)
@blag, I tried to make this an issue so I could assign it to you, but can't. Anyway, it can be referenced in a PR with `closes`. | https://github.com/apache/airflow/issues/25036 | https://github.com/apache/airflow/pull/25086 | 808035e00aaf59a8012c50903a09d3f50bd92ca4 | f0c9ac9da6db3a00668743adc9b55329ec567066 | 2022-07-13T19:31:16Z | python | 2022-07-19T03:43:42Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,033 | ["airflow/models/dag.py", "airflow/www/templates/airflow/dag.html", "airflow/www/templates/airflow/dags.html", "airflow/www/views.py", "tests/models/test_dag.py", "tests/www/views/test_views_base.py"] | next run should show deps fulfillment e.g. 0 of 3 | on dags page (i.e. the home page) we have a "next run" column. for dataset-driven dags, since we can't know for certain when it will be, we could instead show how many deps are fulfilled, e.g. `0 of 1` and perhaps make it a link to the datasets that the dag is dependened on.
Here's a sample query that returns the DAGs which _are_ ready to run; for this feature you'd also need the number of deps fulfilled and the total number of deps.
```python
# these dag ids are triggered by datasets, and they are ready to go.
dataset_triggered_dag_info_list = {
    x.dag_id: (x.first_event_time, x.last_event_time)
    for x in session.query(
        DatasetDagRef.dag_id,
        func.max(DDRQ.created_at).label('last_event_time'),
        func.max(DDRQ.created_at).label('first_event_time'),
    )
    .join(
        DDRQ,
        and_(
            DDRQ.dataset_id == DatasetDagRef.dataset_id,
            DDRQ.target_dag_id == DatasetDagRef.dag_id,
        ),
        isouter=True,
    )
    .group_by(DatasetDagRef.dag_id)
    .having(func.count() == func.sum(case((DDRQ.target_dag_id.is_not(None), 1), else_=0)))
    .all()
}
``` | https://github.com/apache/airflow/issues/25033 | https://github.com/apache/airflow/pull/25141 | 47b72056c46931aef09d63d6d80fbdd3d9128b09 | 03a81b66de408631147f9353de6ffd3c1df45dbf | 2022-07-13T19:19:26Z | python | 2022-07-21T18:28:47Z |
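To get fulfilled vs. total counts instead, a variant of the same query could drop the `HAVING` filter and select both aggregates (a sketch using the same models and aliases as the query above):
```python
dep_counts = (
    session.query(
        DatasetDagRef.dag_id,
        func.count().label('total_deps'),
        func.sum(case((DDRQ.target_dag_id.is_not(None), 1), else_=0)).label('fulfilled_deps'),
    )
    .join(
        DDRQ,
        and_(
            DDRQ.dataset_id == DatasetDagRef.dataset_id,
            DDRQ.target_dag_id == DatasetDagRef.dag_id,
        ),
        isouter=True,
    )
    .group_by(DatasetDagRef.dag_id)
    .all()
)
```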
closed | apache/airflow | https://github.com/apache/airflow | 25,019 | ["airflow/providers/amazon/aws/log/cloudwatch_task_handler.py", "airflow/providers/amazon/provider.yaml", "docs/apache-airflow-providers-amazon/index.rst", "generated/provider_dependencies.json", "tests/providers/amazon/aws/log/test_cloudwatch_task_handler.py"] | update watchtower version in amazon provider | ### Description
There is a limitation to version 2:
https://github.com/apache/airflow/blob/809d95ec06447c9579383d15136190c0963b3c1b/airflow/providers/amazon/provider.yaml#L48
### Use case/motivation
Using an up-to-date version of the library.
### Related issues
Didn't find any.
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25019 | https://github.com/apache/airflow/pull/34747 | 7764a51ac9b021a77a57707bc7e750168e9e0da0 | c01abd1c2eed8f60fec5b9d6cc0232b54efa52de | 2022-07-13T09:37:45Z | python | 2023-10-06T14:35:09Z |
closed | apache/airflow | https://github.com/apache/airflow | 25,007 | ["airflow/www/package.json"] | Invalid `package.json` file in `airflow/www` | ### Apache Airflow version
2.3.3 (latest released)
### What happened
According to the npm docs, the fields `name` and `version` are required in `package.json`, but they are not present in `airflow/www/package.json`.
See: https://docs.npmjs.com/creating-a-package-json-file#required-name-and-version-fields
This can confuse some build tools, but it is also simply an incorrect format.
### What you think should happen instead
The fields `name` and `version` should be defined even if they contain just dummy values - we don't need real values since the package is not published in npm registry.
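Dummy values would satisfy the spec; for example (the `name` and `version` values below are placeholders, not a proposal for specific values):
```json
{
  "name": "airflow-www",
  "version": "0.0.1",
  "private": true
}
```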
### How to reproduce
_No response_
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/25007 | https://github.com/apache/airflow/pull/25065 | 7af840f65813404122702c5511ec67c3b952b3c3 | 1fd59d61decdc1d7e493eca80a629d02533a4ba0 | 2022-07-12T19:56:34Z | python | 2022-07-14T17:34:07Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,996 | ["airflow/models/dag.py", "airflow/models/taskmixin.py", "tests/models/test_dag.py"] | Airflow doesn't set default task group while calling dag.add_tasks | ### Apache Airflow version
2.3.3 (latest released)
### What happened
Airflow sets the default task group while creating an operator if the `dag` parameter is set:
https://github.com/apache/airflow/blob/main/airflow/models/baseoperator.py#L236
However, it doesn't set the default task group when adding a task using the `dag.add_task` function:
https://github.com/apache/airflow/blob/main/airflow/models/dag.py#L2179
This breaks the code at
https://github.com/apache/airflow/blob/main/airflow/models/taskmixin.py#L312 and produces the error "Cannot check for mapped dependants when not attached to a DAG."
Please also add the lines below to the `dag.add_task` function:
```python
if dag:
    task_group = TaskGroupContext.get_current_task_group(dag)
    if task_group:
        task_id = task_group.child_id(task_id)
```
### What you think should happen instead
It should not break if task is added using dag.add_task
### How to reproduce
Don't pass the `dag` parameter while creating the operator object; add the task using `add_task` on the DAG instead.
### Operating System
Any
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24996 | https://github.com/apache/airflow/pull/25000 | 45e5150714e0a5a8e82e3fa6d0b337b92cbeb067 | ce0a6e51c2d4ee87e008e28897b2450778b51003 | 2022-07-12T11:28:04Z | python | 2022-08-05T15:17:38Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,953 | ["airflow/providers/oracle/example_dags/__init__.py", "airflow/providers/oracle/example_dags/example_oracle.py", "docs/apache-airflow-providers-oracle/index.rst", "docs/apache-airflow-providers-oracle/operators/index.rst"] | oracle hook _map_param() incorrect | ### Apache Airflow Provider(s)
oracle
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.3.3 (latest released)
### Operating System
OEL 7.6
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
The [_map_param()](https://github.com/apache/airflow/blob/main/airflow/providers/oracle/hooks/oracle.py#L36) function from the Oracle hook has an incorrect type check:
```
PARAM_TYPES = {bool, float, int, str}

def _map_param(value):
    if value in PARAM_TYPES:
        # In this branch, value is a Python type; calling it produces
        # an instance of the type which is understood by the Oracle driver
        # in the out parameter mapping mechanism.
        value = value()
    return value
```
`if value in PARAM_TYPES` never evaluates to True for values of any of the mentioned types:
```
PARAM_TYPES = {bool, float, int, str}
>>> "abc" in PARAM_TYPES
False
>>> 123 in PARAM_TYPES
False
>>> True in PARAM_TYPES
False
>>> float(5.5) in PARAM_TYPES
False
```
The correct condition would be `if type(value) in PARAM_TYPES`.
**But** if we only fix this condition, then in the positive case (`type(value) in PARAM_TYPES` is True) another issue occurs with `value = value()`:
`bool`, `float`, `int` and `str` instances are not callable
(`TypeError: 'int' object is not callable`).
This line is probably here for passing a Python callable into the SQL statement or procedure params in tasks, is it? If so, the condition needs to be corrected to:
`if type(value) not in PARAM_TYPES`
Here is the full fix:
```
def _map_param(value):
    if type(value) not in PARAM_TYPES:
        value = value()
    return value
```
The following cases were tested:
```
def oracle_callable(n=123):
    return n

def oracle_pass():
    return 123

task1 = OracleStoredProcedureOperator(task_id='task1', oracle_conn_id='oracle_conn', procedure='AIRFLOW_TEST',
                                      parameters={'var': oracle_callable})
task2 = OracleStoredProcedureOperator(task_id='task2', oracle_conn_id='oracle_conn', procedure='AIRFLOW_TEST',
                                      parameters={'var': oracle_callable()})
task3 = OracleStoredProcedureOperator(task_id='task3', oracle_conn_id='oracle_conn', procedure='AIRFLOW_TEST',
                                      parameters={'var': oracle_callable(456)})
task4 = OracleStoredProcedureOperator(task_id='task4', oracle_conn_id='oracle_conn', procedure='AIRFLOW_TEST',
                                      parameters={'var': oracle_pass})
```
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24953 | https://github.com/apache/airflow/pull/30979 | 130b6763db364426d1d794641b256d7f2ce0b93d | edebfe3f2f2c7fc2b6b345c6bc5f3a82e7d47639 | 2022-07-10T23:01:34Z | python | 2023-05-09T18:32:15Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,938 | ["airflow/providers/databricks/operators/databricks.py"] | Add support for dynamic databricks connection id | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==3.0.0 # Latest
### Apache Airflow version
2.3.2 (latest released)
### Operating System
Linux
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
_No response_
### What you think should happen instead
### Motivation
In a single Airflow deployment, we are looking to have the ability to support multiple Databricks connections (`databricks_conn_id`) at runtime. This can be helpful to run the same DAG against multiple testing lanes (a.k.a. different development/testing Databricks environments).
### Potential Solution
We can pass the connection id via the Airflow DAG run configuration at runtime. For this, `databricks_conn_id` is required to be a templated field.
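A sketch of what the change amounts to: adding `databricks_conn_id` to the operator's `template_fields`. The subclass name below is hypothetical; the actual change would live in the provider itself.
```python
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator


class TemplatedConnDatabricksSubmitRunOperator(DatabricksSubmitRunOperator):
    # render the connection id too, e.g. "{{ dag_run.conf['databricks_conn_id'] }}"
    template_fields = (*DatabricksSubmitRunOperator.template_fields, "databricks_conn_id")
```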
### How to reproduce
Minor enhancement/new feature
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24938 | https://github.com/apache/airflow/pull/24945 | 7fc5e0b24a8938906ad23eaa1262c9fb74ee2df1 | 8dfe7bf5ff090a675353a49da21407dffe2fc15e | 2022-07-09T07:55:53Z | python | 2022-07-11T14:47:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,936 | ["airflow/example_dags/example_dag_decorator.py", "airflow/example_dags/example_sla_dag.py", "airflow/models/dag.py", "docs/spelling_wordlist.txt"] | Type hints for taskflow @dag decorator | ### Description
I find no type hints when writing a DAG using the TaskFlow API. The `dag` and `task` decorators are simple wrappers without detailed arguments provided in the docstring.
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24936 | https://github.com/apache/airflow/pull/25044 | 61fc4899d71821fd051944d5d9732f7d402edf6c | be63c36bf1667c8a420d34e70e5a5efd7ca42815 | 2022-07-09T03:25:14Z | python | 2022-07-15T01:29:57Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,921 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | Add options to Docker Operator | ### Description
I'm trying to add options like `log-opt max-size 5` and I can't.
### Use case/motivation
I'm working on Hummingbot and I would like to offer the community a system to manage multiple bots, rebalance portfolios, etc. Our system needs a terminal to execute commands, so currently I'm not able to use Airflow to accomplish this task.
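For context, this is how the underlying Docker SDK exposes log options (the SDK call is real; exposing it through the DockerOperator is the requested addition):
```python
import docker
from docker.types import LogConfig

client = docker.from_env()
# json-file logging driver with size-based rotation
log_config = LogConfig(type=LogConfig.types.JSON, config={"max-size": "5m", "max-file": "3"})
client.containers.run("alpine", "echo hello", log_config=log_config)
```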
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24921 | https://github.com/apache/airflow/pull/26653 | fd27584b3dc355eaf0c0cd7a4cd65e0e580fcf6d | 19d6f54704949d017b028e644bbcf45f5b53120b | 2022-07-08T12:01:04Z | python | 2022-09-27T14:42:37Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,919 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | Send default email if file "html_content_template" not found | ### Apache Airflow version
2.3.2 (latest released)
### What happened
I created a new email template to be sent when there are task failures. I accidentally added the path to the `[email] html_content_template` and `[email] subject_template` with a typo and no email was sent. The task's log is the following:
```
Traceback (most recent call last):
File "/home/user/.conda/envs/airflow/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1942, in handle_failure
self.email_alert(error, task)
File "/home/user/.conda/envs/airflow/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2323, in email_alert
subject, html_content, html_content_err = self.get_email_subject_content(exception, task=task)
File "/home/user/.conda/envs/airflow/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2315, in get_email_subject_content
subject = render('subject_template', default_subject)
File "/home/user/.conda/envs/airflow/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2311, in render
with open(path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/user/airflow/config/templates/email_failure_subject.tmpl'
```
I've looked at the TaskInstance class (https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py).
I've seen that the `render` function (https://github.com/apache/airflow/blob/bcf2c418d261c6244e60e4c2d5de42b23b714bd1/airflow/models/taskinstance.py#L2271) has a `content` parameter, which is not used inside.
I guess the solution to this bug is simple: just add a `try`/`except` block and return the default content in the `except` branch.
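A sketch of that proposed fix inside the `render` helper (assuming the surrounding `conf` and logger objects that the module already uses):
```python
def render(key, content):
    path = conf.get('email', key, fallback=None)
    try:
        if path:
            with open(path) as f:
                content = f.read()
    except FileNotFoundError:
        # fall back to the default content instead of failing the email alert
        log.warning("Could not read template at %s, using default content", path)
    return content
```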
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
CentOS Linux 8
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
Conda environment
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24919 | https://github.com/apache/airflow/pull/24943 | b7f51b9156b780ebf4ca57b9f10b820043f61651 | fd6f537eab7430cb10ea057194bfc9519ff0bb38 | 2022-07-08T11:07:00Z | python | 2022-07-18T18:22:03Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,844 | ["airflow/www/static/js/api/useGridData.test.js", "airflow/www/static/js/api/useGridData.ts"] | grid_data api keep refreshing when backfill DAG runs | ### Apache Airflow version
2.3.2 (latest released)
### What happened

### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
186-Ubuntu
### Versions of Apache Airflow Providers
2.3.2
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24844 | https://github.com/apache/airflow/pull/25042 | 38d6c28f9cf9ee4f663d068032830911f7a8e3a3 | de6938e173773d88bd741e43c7b0aa16d8a1a167 | 2022-07-05T12:09:40Z | python | 2022-07-20T10:30:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,820 | ["airflow/models/dag.py", "tests/models/test_dag.py"] | Dag disappears when DAG tag is longer than 100 char limit | ### Apache Airflow version
2.2.5
### What happened
We added new DAG tags to a couple of our DAGs. In the case where the tag was longer than the 100-character limit, the DAG was not showing in the UI and wasn't scheduled. It was, however, possible to reach it by typing in the URL to the DAG.
Usually when DAGs are broken there will be an error message in the UI, but this problem did not render any error message.
This problem occurred in one of our templated DAGs. Only one DAG broke, and it was the one with a DAG tag which was too long. When we fixed the length, the DAG was scheduled and was visible in the UI again.
### What you think should happen instead
Exclude the dag if it is over the 100 character limit or show an error message in the UI.
### How to reproduce
Add a DAG tag which is longer than 100 characters.
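A sketch of the kind of guard that would surface the problem instead of silently dropping the DAG (the 100-character figure matches the limit described above; the validation hook itself is hypothetical):
```python
TAG_MAX_LEN = 100  # assumed length of the dag_tag name column

def validate_tags(tags):
    for tag in tags or []:
        if len(tag) > TAG_MAX_LEN:
            raise ValueError(f"DAG tag {tag!r} exceeds the {TAG_MAX_LEN}-character limit")
```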
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
Running Airflow in Kubernetes.
Syncing DAGs from S3 with https://tech.scribd.com/blog/2020/breaking-up-the-dag-repo.html
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24820 | https://github.com/apache/airflow/pull/25196 | a5cbcb56774d09b67c68f87187f2f48d6e70e5f0 | 4b28635b2085a07047c398be6cc1ac0252a691f7 | 2022-07-04T07:59:19Z | python | 2022-07-25T13:46:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,783 | ["airflow/operators/python.py", "tests/operators/test_python.py"] | Check if virtualenv is installed fails | ### Apache Airflow version
2.3.0
### What happened
When using a `PythonVirtualenvOperator` it is checked if `virtualenv` is installed by
`if not shutil.which("virtualenv"):`
https://github.com/apache/airflow/blob/a1679be85aa49c0d6a7ba2c31acb519a5bcdf594/airflow/operators/python.py#L398
Actually, this expression checks whether `virtualenv` is on PATH. If Airflow is itself installed in a virtual environment and `virtualenv` is not installed in that environment, the check might still pass, but `virtualenv` cannot be used as it is not present in the environment.
### What you think should happen instead
It should be checked if `virtualenv` is actually available in the environment.
```python
import importlib.util

if importlib.util.find_spec("virtualenv") is None:
    raise AirflowException('PythonVirtualenvOperator requires virtualenv, please install it.')
```
https://stackoverflow.com/a/14050282
### How to reproduce
_No response_
### Operating System
Ubuntu 20.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24783 | https://github.com/apache/airflow/pull/32939 | 16e0830a5dfe42b9ab0bbca7f8023bf050bbced0 | ddcd474a5e2ce4568cca646eb1f5bce32b4ba0ed | 2022-07-01T12:24:38Z | python | 2023-07-30T04:57:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,773 | ["airflow/providers/amazon/aws/secrets/secrets_manager.py"] | AWS secret manager: AccessDeniedException is not a valid Exception | ### Apache Airflow version
2.3.1
### What happened
Airflow's AWS Secrets Manager backend handles `AccessDeniedException` in [secrets_manager.py](https://github.com/apache/airflow/blob/providers-amazon/4.0.0/airflow/providers/amazon/aws/secrets/secrets_manager.py#L272), whereas it's not a valid exception for the client:
```
File "/usr/local/lib/python3.9/site-packages/airflow/models/variable.py", line 265, in get_variable_from_secrets
var_val = secrets_backend.get_variable(key=key)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/amazon/aws/secrets/secrets_manager.py", line 238, in get_variable
return self._get_secret(self.variables_prefix, key)
File "/usr/local/lib/python3.9/site-packages/airflow/providers/amazon/aws/secrets/secrets_manager.py", line 275, in _get_secret
except self.client.exceptions.AccessDeniedException:
File "/home/astro/.local/lib/python3.9/site-packages/botocore/errorfactory.py", line 51, in __getattr__
raise AttributeError(
AttributeError: <botocore.errorfactory.SecretsManagerExceptions object at 0x7f19cd3c09a0> object has no attribute 'AccessDeniedException'. Valid exceptions are: DecryptionFailure, EncryptionFailure, InternalServiceError, InvalidNextTokenException, InvalidParameterException, InvalidRequestException, LimitExceededException, MalformedPolicyDocumentException, PreconditionNotMetException, PublicPolicyException, ResourceExistsException, ResourceNotFoundException
```
### What you think should happen instead
Handle exception specific to [get_secret_value](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/secretsmanager.html#SecretsManager.Client.get_secret_value)
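A sketch of handling this via `botocore.exceptions.ClientError`, which is how boto3 surfaces `AccessDeniedException` since the error factory does not generate a class for it:
```python
from botocore.exceptions import ClientError

try:
    response = self.client.get_secret_value(SecretId=secret_id)
except ClientError as e:
    # check the error code on the generic ClientError instead of a
    # factory-generated exception class that may not exist
    if e.response["Error"]["Code"] in ("AccessDeniedException", "ResourceNotFoundException"):
        return None
    raise
```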
### How to reproduce
This happened in an unusual case where hundreds of secrets are loaded at once. I'm assuming the request hangs for over 30s.
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.4.0
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24773 | https://github.com/apache/airflow/pull/24898 | f69e597dfcbb6fa7e7f1a3ff2b5013638567abc3 | 60c2a3bf82b4fe923b8006f6694f74823af87537 | 2022-07-01T05:15:40Z | python | 2022-07-08T14:21:42Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,755 | ["airflow/utils/serve_logs.py", "newsfragments/24755.improvement.rst"] | Log server on celery worker does not work in IPv6-only setup | ### Apache Airflow version
2.2.5
### What happened
I deployed the Airflow helm chart in a Kubernetes cluster that only allows IPv6 traffic.
When I want to look at a task log in the UI there is this message:
```
*** Fetching from: http://airflow-v1-worker-0.airflow-v1-worker.airflow.svc.cluster.local:8793/log/my-dag/my-task/2022-06-28T00:00:00+00:00/1.log
*** Failed to fetch log file from worker. [Errno 111] Connection refused
```
So the webserver cannot fetch the logfile from the worker.
This happens, in my opinion, because the gunicorn application listens on `0.0.0.0` (IPv4), see the [code](https://github.com/apache/airflow/blob/main/airflow/utils/serve_logs.py#L142) or the worker log below, while the inter-pod communication in my cluster is IPv6.
```
~ » k logs airflow-v1-worker-0 -c airflow-worker -p
[2022-06-30 14:51:52 +0000] [49] [INFO] Starting gunicorn 20.1.0
[2022-06-30 14:51:52 +0000] [49] [INFO] Listening at: http://0.0.0.0:8793 (49)
[2022-06-30 14:51:52 +0000] [49] [INFO] Using worker: sync
[2022-06-30 14:51:52 +0000] [50] [INFO] Booting worker with pid: 50
[2022-06-30 14:51:52 +0000] [51] [INFO] Booting worker with pid: 51
-------------- celery@airflow-v1-worker-0 v5.2.3 (dawn-chorus)
--- ***** -----
-- ******* ---- Linux-5.10.118-x86_64-with-glibc2.28 2022-06-30 14:51:53
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: airflow.executors.celery_executor:0x7f73b8d23d00
- ** ---------- .> transport: redis://:**@airflow-v1-redis-master.airflow.svc.cluster.local:6379/1
- ** ---------- .> results: postgresql://airflow:**@airflow-v1-pgbouncer.airflow.svc.cluster.local:6432/airflow_backend_db
- *** --- * --- .> concurrency: 16 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> default exchange=default(direct) key=default
[tasks]
. airflow.executors.celery_executor.execute_command
```
### What you think should happen instead
The gunicorn webserver should (configurably) listen to IPv6 traffic.
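A sketch of the change: gunicorn accepts an IPv6 wildcard bind address, so the log server could bind to `[::]` instead of `0.0.0.0` (`bind` is a real gunicorn setting; whether it should also stay dual-stack on IPv4 hosts is a design question):
```python
# gunicorn settings for the worker log server
options = {
    'bind': '[::]:8793',  # IPv6 wildcard; also accepts IPv4 on dual-stack Linux hosts
    'workers': 2,
}
```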
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24755 | https://github.com/apache/airflow/pull/24846 | 7f749b653ce363b1450346b61c7f6c406f72cd66 | 2f29bfefb59b0014ae9e5f641d3f6f46c4341518 | 2022-06-30T14:09:25Z | python | 2022-07-07T20:16:36Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,753 | ["airflow/providers/amazon/aws/operators/glue.py"] | Allow back script_location in Glue to be None *again* | ### Apache Airflow version
2.3.2 (latest released)
### What happened
In this commit the AWS Glue provider was broken by enforcing `script_location` to be a string:
https://github.com/apache/airflow/commit/27b77d37a9b2e63e95a123c31085e580fc82b16c
Then someone realized that (see the comment thread [there](https://github.com/apache/airflow/commit/27b77d37a9b2e63e95a123c31085e580fc82b16c#r72466413)) and created a new PR to allow `None` to be passed again: https://github.com/apache/airflow/pull/23357
But the parameter no longer has the `Optional[str]` typing, and now the error persists with this traceback:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/providers/amazon/aws/operators/glue.py", line 163, in __init__
super().__init__(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 373, in apply_defaults
raise AirflowException(f"missing keyword argument {missing_args.pop()!r}")
airflow.exceptions.AirflowException: missing keyword argument 'script_location'
```
### What you think should happen instead
Please revert the change and add `Optional[str]` here: https://github.com/apache/airflow/blob/main/airflow/providers/amazon/aws/operators/glue.py#L69
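For illustration, a sketch of the requested annotation (a trimmed-down signature, not the operator's full argument list):
```python
# Sketch of the requested change: make script_location optional again so the
# operator can be constructed without it.
from typing import Optional

from airflow.models import BaseOperator

class GlueJobOperator(BaseOperator):
    def __init__(self, *, script_location: Optional[str] = None, **kwargs) -> None:
        super().__init__(**kwargs)
        self.script_location = script_location
```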
### How to reproduce
Use the class without a script_location
### Operating System
Linux
### Versions of Apache Airflow Providers
Apache airflow 2.3.2
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24753 | https://github.com/apache/airflow/pull/24754 | 1b3905ef6eb5630e8d12975d9e91600ffb832471 | 49925be66483ce942bcd4827df9dbd41c3ef41cf | 2022-06-30T13:37:57Z | python | 2022-07-01T14:02:27Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,748 | ["airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/kubernetes/kube_client.py", "tests/kubernetes/test_client.py"] | Configuring retry policy of the the kubernetes CoreV1Api ApiClient | ### Description
Can we add an option to configure the retry policy of the kubernetes `CoreV1Api` ApiClient? Or set its default to a more resilient configuration.
Today it appears to retry operations 3 times but with 0 backoff in between each try, causing temporary network glitches to result in fatal errors.
Following the flow below:
1. `airflow.kubernetes.kube_client.get_kube_client()`
Calls `load_kube_config()` without any configuration set; this assigns a default configuration with `retries=None` via `Configuration.set_default()`
1b. Creates `CoreV1Api()` with `api_client=None`
1c. The `ApiClient()` default constructor creates a default configuration object via `Configuration.get_default_copy()`; this is the default injected above
2. On request, through some complicated flow inside `ApiClient` and urllib3, this `configuration.retries` eventually finds its way into urllib3's `HTTPConnectionPool`, where, if unset, it falls back to `urllib3.util.Retry.DEFAULT`; this has a policy of 3 retries with 0 backoff time in between.
------
Configuring the ApiClient would mean changing the `get_kube_client()` to something roughly resembling:
```
from urllib3.util import Retry
from kubernetes import config
from kubernetes.client import ApiClient, Configuration, CoreV1Api

client_config = Configuration()
# urllib3's Retry takes backoff_factor, not backoff; the sleep between
# attempts grows exponentially with it
client_config.retries = Retry(total=3, backoff_factor=LOAD_FROM_CONFIG)
config.load_kube_config(..., client_configuration=client_config)
apiclient = ApiClient(client_config)
return CoreV1Api(apiclient)
```
I don't know myself how fine-grained the configuration exposed from Airflow should be. The retry object has a lot of different options, and so does the rest of the kubernetes client Configuration object. Maybe it should be injected from a plugin rather than the config file? Maybe urllib3 or the kubernetes library have other ways to set a default config?
### Use case/motivation
Our Kubernetes API server had some unknown hiccup for 10 seconds; this caused the Airflow kubernetes executor to crash, restarting Airflow, which then started killing pods that were running fine, showing the following log: "Reset the following 1 orphaned TaskInstances"
If the retries had had some backoff, it would likely have survived this hiccup.
See attachment for the full stack trace, it's too long to include inline. Here is the most interesting parts:
```
2022-06-29 21:25:49 Class={kubernetes_executor.py:111} Level=ERROR Unknown error in KubernetesJobWatcher. Failing
...
2022-06-29 21:25:49 Class={connectionpool.py:810} Level=WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbe35de0c70>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/namespaces/default/pods/REDACTED
2022-06-29 21:25:49 urllib3.exceptions.ProtocolError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read))
2022-06-29 21:25:49 Class={connectionpool.py:810} Level=WARNING Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbe315ec040>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/namespaces/default/pods/REDACTED
2022-06-29 21:25:49 Class={connectionpool.py:810} Level=WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fbe315ec670>: Failed to establish a new connection: [Errno 111] Connection refused')': /api/v1/namespaces/default/pods/REDACTED
...
2022-06-29 21:25:50 Class={kubernetes_executor.py:813} Level=INFO Shutting down Kubernetes executor
...
2022-06-29 21:26:08 Class={scheduler_job.py:696} Level=INFO Starting the scheduler
...
2022-06-29 21:27:29 Class={scheduler_job.py:1285} Level=INFO Message=Reset the following 1 orphaned TaskInstances:
```
[airflowkubernetsretrycrash.log](https://github.com/apache/airflow/files/9017815/airflowkubernetsretrycrash.log)
From airflow version 2.3.2
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24748 | https://github.com/apache/airflow/pull/29809 | 440bf46ff0b417c80461cf84a68bd99d718e19a9 | dcffbb4aff090e6c7b6dc96a4a68b188424ae174 | 2022-06-30T08:27:01Z | python | 2023-04-14T13:37:42Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,736 | ["airflow/sensors/time_sensor.py", "tests/sensors/test_time_sensor.py"] | TimeSensorAsync breaks if target_time is timezone-aware | ### Apache Airflow version
2.3.2 (latest released)
### What happened
`TimeSensorAsync` fails with the following error if `target_time` is aware:
```
[2022-06-29, 05:09:11 CDT] {taskinstance.py:1889} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/sensors/time_sensor.py", line 60, in execute
trigger=DateTimeTrigger(moment=self.target_datetime),
File "/opt/conda/envs/production/lib/python3.9/site-packages/airflow/triggers/temporal.py", line 42, in __init__
raise ValueError(f"The passed datetime must be using Pendulum's UTC, not {moment.tzinfo!r}")
ValueError: The passed datetime must be using Pendulum's UTC, not Timezone('America/Chicago')
```
### What you think should happen instead
Given the fact that `TimeSensor` correctly handles timezones (#9882), this seems like a bug. `TimeSensorAsync` should be a drop-in replacement for `TimeSensor`, and therefore should have the same timezone behavior.
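One plausible way to make it a drop-in replacement (a sketch using helpers from `airflow.utils.timezone`; not necessarily the fix the project will adopt) is to normalize the target to Pendulum UTC before building the trigger:
```python
# Sketch: convert an aware (or naive-but-localized) target to UTC before
# handing it to DateTimeTrigger, which only accepts Pendulum UTC datetimes.
from airflow.utils import timezone

target_datetime = timezone.coerce_datetime(target_datetime)  # make it aware
utc_target = timezone.convert_to_utc(target_datetime)        # Pendulum UTC
# trigger = DateTimeTrigger(moment=utc_target)
```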
### How to reproduce
```
#!/usr/bin/env python3

import datetime

import pendulum

from airflow.decorators import dag
from airflow.sensors.time_sensor import TimeSensor, TimeSensorAsync


@dag(
    start_date=datetime.datetime(2022, 6, 29),
    schedule_interval='@daily',
)
def time_sensor_dag():
    naive_time1 = datetime.time(0, 1)
    aware_time1 = datetime.time(0, 1).replace(tzinfo=pendulum.local_timezone())

    naive_time2 = pendulum.time(23, 59)
    aware_time2 = pendulum.time(23, 59).replace(tzinfo=pendulum.local_timezone())

    TimeSensor(task_id='naive_time1', target_time=naive_time1, mode='reschedule')
    TimeSensor(task_id='naive_time2', target_time=naive_time2, mode='reschedule')
    TimeSensor(task_id='aware_time1', target_time=aware_time1, mode='reschedule')
    TimeSensor(task_id='aware_time2', target_time=aware_time2, mode='reschedule')

    TimeSensorAsync(task_id='async_naive_time1', target_time=naive_time1)
    TimeSensorAsync(task_id='async_naive_time2', target_time=naive_time2)
    TimeSensorAsync(task_id='async_aware_time1', target_time=aware_time1)  # fails
    TimeSensorAsync(task_id='async_aware_time2', target_time=aware_time2)  # fails

dag = time_sensor_dag()
```
This can also happen if the `target_time` is naive and `core.default_timezone = system`.
### Operating System
CentOS Stream 8
### Versions of Apache Airflow Providers
N/A
### Deployment
Other
### Deployment details
Standalone
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24736 | https://github.com/apache/airflow/pull/25221 | f53bd5df2a0b370a14f811b353229ad3e9c66662 | ddaf74df9b1e9a4698d719f81931e822b21b0a95 | 2022-06-29T15:28:16Z | python | 2022-07-22T21:03:46Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,725 | ["airflow/www/templates/airflow/dag.html"] | Trigger DAG from templated view tab producing bad request | ### Body
Reproduced on main branch.
The bug:
When clicking Trigger DAG from the templated view tab, it results in a BAD REQUEST page; however, the DAG run is created (it also produces the UI alert "it should start any moment now").
By comparison, triggering the DAG from the log tab works as expected, so the issue seems to be specific to the templated view tab.

### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/24725 | https://github.com/apache/airflow/pull/25729 | f24e706ff7a84fd36ea39dc3399346c357d40bd9 | 69663b245a9a67b6f05324ce7b141a1bd9b05e0a | 2022-06-29T07:06:00Z | python | 2022-08-17T13:21:30Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,692 | ["airflow/providers/apache/hive/hooks/hive.py", "tests/providers/apache/hive/hooks/test_hive.py"] | Error for Hive Server2 Connection Document | ### What do you see as an issue?
The document https://airflow.apache.org/docs/apache-airflow-providers-apache-hive/stable/connections/hiveserver2.html says that in Extra you must use "auth_mechanism", but the source code uses "authMechanism".
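For context, this is roughly what the mismatch looks like from the hook's side (a sketch based on the mismatch described above; the default value shown is an assumption):
```python
# Sketch: the provider looks up the camelCase key in the connection Extra, so
# an Extra written as {"auth_mechanism": "..."} per the docs is silently ignored.
extra = connection.extra_dejson
auth_mechanism = extra.get("authMechanism", "NONE")  # default is an assumption
```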
### Solving the problem
Use the same key name in the documentation and the source code.
### Anything else
None
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24692 | https://github.com/apache/airflow/pull/24713 | 13908c2c914cf08f9d962a4d3b6ae54fbdf1d223 | cef97fccd511c8e5485df24f27b82fa3e46929d7 | 2022-06-28T01:16:20Z | python | 2022-06-29T14:12:23Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,681 | ["airflow/providers/docker/operators/docker.py", "tests/providers/docker/operators/test_docker.py"] | Docker is not pushing last line over xcom | ### Apache Airflow Provider(s)
docker
### Versions of Apache Airflow Providers
apache-airflow-providers-docker==2.7.0
docker==5.0.3
### Apache Airflow version
2.3.2 (latest released)
### Operating System
20.04.4 LTS (Focal Fossa)
### Deployment
Docker-Compose
### Deployment details
Deployed using docker compose command
### What happened
Below is my DockerOperator code
```
extract_data_from_presto = DockerOperator(
    task_id='download_data',
    image=IMAGE_NAME,
    api_version='auto',
    auto_remove=True,
    mount_tmp_dir=False,
    docker_url='unix://var/run/docker.sock',
    network_mode="host",
    tty=True,
    xcom_all=False,
    mounts=MOUNTS,
    environment={
        "PYTHONPATH": "/opt",
    },
    command=f"test.py",
    retries=3,
    dag=dag,
)
```
The last line printed in the container is not getting pushed over XCom. In my case the last line in the container is
`[2022-06-27, 08:31:34 UTC] {docker.py:312} INFO - {"day": 20220627, "batch": 1656318682, "source": "all", "os": "ubuntu"}`
However the xcom value returned shown in UI is empty
<img width="1329" alt="image" src="https://user-images.githubusercontent.com/25153155/175916850-8f50c579-9d26-44bc-94ae-6d072701ff0b.png">
### What you think should happen instead
It should have returned `{"day": 20220627, "batch": 1656318682, "source": "all", "os": "ubuntu"}` as the output of return_value
### How to reproduce
I am not able to reproduce it exactly with a small example, but it fails with my application. So I extended the DockerOperator class in my code, copy-pasted the `_run_image_with_mounts` method, and added 2 print statements
```
print(f"log lines from attach {log_lines}")
try:
if self.xcom_all:
return [stringify(line).strip() for line in self.cli.logs(**log_parameters)]
else:
lines = [stringify(line).strip() for line in self.cli.logs(**log_parameters, tail=1)]
print(f"lines from logs: {lines}")
```
Value of log_lines comes from this [line](https://github.com/apache/airflow/blob/main/airflow/providers/docker/operators/docker.py#L309)
The output of this is below. The first line is the last print in my Docker code
```
[2022-06-27, 14:43:26 UTC] {pipeline.py:103} INFO - {"day": 20220627, "batch": 1656340990, "os": "ubuntu", "source": "all"}
[2022-06-27, 14:43:27 UTC] {logging_mixin.py:115} INFO - log lines from attach ['2022-06-27, 14:43:15 UTC - root - read_from_presto - INFO - Processing datetime is 2022-06-27 14:43:10.755685', '2022-06-27, 14:43:15 UTC - pyhive.presto - presto - INFO - SHOW COLUMNS FROM <truncated data as it's too long>, '{"day": 20220627, "batch": 1656340990, "os": "ubuntu", "source": "all"}']
[2022-06-27, 14:43:27 UTC] {logging_mixin.py:115} INFO - lines from logs: ['{', '"', 'd', 'a', 'y', '"', ':', '', '2', '0', '2', '2', '0', '6', '2', '7', ',', '', '"', 'b', 'a', 't', 'c', 'h', '"', ':', '', '1', '6', '5', '6', '3', '4', '0', '9', '9', '0', ',', '', '"', 'o', 's', '"', ':', '', '"', 'u', 'b', 'u', 'n', 't', 'u', '"', ',', '', '"', 's', 'o', 'u', 'r', 'c', 'e', '"', ':', '', '"', 'a', 'l', 'l', '"', '}', '', '']
```
From the above you can see that, for some unknown reason, `self.cli.logs(**log_parameters, tail=1)` returns an array of characters. This change was brought in as part of [this change](https://github.com/apache/airflow/commit/2f4a3d4d4008a95fc36971802c514fef68e8a5d4). Before that it was returning the data from log_lines
My suggestion is to modify the code as below
```
if self.xcom_all:
    return [stringify(line).strip() for line in log_lines]
else:
    lines = [stringify(line).strip() for line in log_lines]
    return lines[-1] if lines else None
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24681 | https://github.com/apache/airflow/pull/24726 | 6fd06fa8c274b39e4ed716f8d347229e017ba8e5 | cc6a44bdc396a305fd53c7236427c578e9d4d0b7 | 2022-06-27T14:59:41Z | python | 2022-07-05T10:43:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,678 | ["airflow/templates.py"] | Macro prev_execution_date is always empty | ### Apache Airflow version
2.3.2 (latest released)
### What happened
The variable `prev_execution_date` is empty on the first run, meaning any usage will automatically trigger a None error.
### What you think should happen instead
A default date should be provided instead, either the DAG's `start_date` or a default `datetime.min`, because on the first run the macro always triggers an error, effectively preventing the DAG from ever running.
### How to reproduce
Pass the variables/macros to any Task:
```
{
"execution_datetime": '{{ ts_nodash }}',
"prev_execution_datetime": '{{ prev_start_date_success | ts_nodash }}' #.strftime("%Y%m%dT%H%M%S")
}
```
While the logical execution date (`execution_datetime`) works, the previous successful logical execution date `prev_execution_datetime` blows up when applying the `ts_nodash` filter. This effectively makes it impossible to ever use said macro, as it will always fail.
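Until then, one possible stop-gap is to guard the filter in the template itself (a sketch; falling back to the run's own logical date is just one choice of default):
```python
# Sketch: only apply ts_nodash when prev_start_date_success is set; otherwise
# fall back to the run's own logical date.
params = {
    "execution_datetime": "{{ ts_nodash }}",
    "prev_execution_datetime": (
        "{{ (prev_start_date_success | ts_nodash) if prev_start_date_success else ts_nodash }}"
    ),
}
```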
### Operating System
Ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24678 | https://github.com/apache/airflow/pull/25593 | 1594d7706378303409590c57ab1b17910e5d09e8 | 741c20770230c83a95f74fe7ad7cc9f95329f2cc | 2022-06-27T12:59:53Z | python | 2022-08-09T10:34:40Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,653 | ["airflow/operators/trigger_dagrun.py"] | Mapped TriggerDagRunOperator causes SerializationError due to operator_extra_links 'property' object is not iterable | ### Apache Airflow version
2.3.2 (latest released)
### What happened
Hi, I have an issue with launching several subDags via a mapped TriggerDagRunOperator (mapping over the `conf` parameter). Here is a demo example of my typical DAG:
```python
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow import XComArg
from datetime import datetime

with DAG(
    'triggerer',
    schedule_interval=None,
    catchup=False,
    start_date=datetime(2019, 12, 2)
) as dag:
    t1 = PythonOperator(
        task_id='first',
        python_callable=lambda: list(map(lambda i: {"x": i}, list(range(10)))),
    )

    t2 = TriggerDagRunOperator.partial(
        task_id='second',
        trigger_dag_id='mydag'
    ).expand(conf=XComArg(t1))

    t1 >> t2
```
But when Airflow tries to import such a DAG it throws the following SerializationError (which I observed both in the UI and in $AIRFLOW_HOME/logs/scheduler/latest/<my_dag_name>.py.log):
```
Broken DAG: [/home/aliona/airflow/dags/triggerer_dag.py] Traceback (most recent call last):
File "/home/aliona/airflow/venv/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 638, in _serialize_node
serialize_op['_operator_extra_links'] = cls._serialize_operator_extra_links(
File "/home/aliona/airflow/venv/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 933, in _serialize_operator_extra_links
for operator_extra_link in operator_extra_links:
TypeError: 'property' object is not iterable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/aliona/airflow/venv/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 1106, in to_dict
json_dict = {"__version": cls.SERIALIZER_VERSION, "dag": cls.serialize_dag(var)}
File "/home/aliona/airflow/venv/lib/python3.10/site-packages/airflow/serialization/serialized_objects.py", line 1014, in serialize_dag
raise SerializationError(f'Failed to serialize DAG {dag.dag_id!r}: {e}')
airflow.exceptions.SerializationError: Failed to serialize DAG 'triggerer': 'property' object is not iterable
```
How it appears in the UI:

### What you think should happen instead
I think that TriggerDagRunOperator mapped over `conf` parameter should serialize and work well by default.
During the debugging process, while trying to make everything work, I found out that a simple non-mapped TriggerDagRunOperator has the value `['Triggered DAG']` in the `operator_extra_links` field, so it is a list. But for a mapped TriggerDagRunOperator, it is a 'property'. I don't have any idea why Airflow cannot get the value of this property during the serialization process, but I tried to reinitialize this field with the `['Triggered DAG']` value, and that finally fixed the issue.
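Given that observation, one plausible direction for a proper fix (an assumption on my side, not a confirmed patch) is to declare the links as a plain class-level list on the operator, so the serializer can iterate it for mapped and non-mapped tasks alike:
```python
# Sketch of the in-module change: a class attribute is iterable during DAG
# serialization, while a property object (as observed on the mapped task) is not.
from airflow.models import BaseOperator
from airflow.operators.trigger_dagrun import TriggerDagRunLink

class TriggerDagRunOperator(BaseOperator):
    operator_extra_links = [TriggerDagRunLink()]
```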
For now, for every case of using a mapped TriggerDagRunOperator, I also use the following code at the end of my DAG file:
```python
# here 'second' is the name of corresponding mapped TriggerDagRunOperator task (see demo code above)
t2_patch = dag.task_dict['second']
t2_patch.operator_extra_links=['Triggered DAG']
dag.task_dict.update({'second': t2_patch})
```
So, for every mapped TriggerDagRunOperator task I manually change the value of the `operator_extra_links` property to `['Triggered DAG']`, and as a result there is no SerializationError. I have a lot of such cases, and all of them work well with this fix: all subDags are launched and the mapped configuration is passed correctly. I can also choose whether or not to wait for the end of their execution; all these options work correctly too.
### How to reproduce
1. Create a DAG with mapped TriggerDagRunOperator tasks (main parameters such as `task_id`, `trigger_dag_id` and others go in the `partial` section; in the `expand` section use the `conf` parameter with a non-empty iterable value), for example:
```python
t2 = TriggerDagRunOperator.partial(
    task_id='second',
    trigger_dag_id='mydag'
).expand(conf=[{'x': 1}])
```
2. Try to serialize the DAG, and the error will appear.
The full example of failing dag file:
```python
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.operators.trigger_dagrun import TriggerDagRunOperator
from airflow import XComArg
from datetime import datetime

with DAG(
    'triggerer',
    schedule_interval=None,
    catchup=False,
    start_date=datetime(2019, 12, 2)
) as dag:
    t1 = PythonOperator(
        task_id='first',
        python_callable=lambda: list(map(lambda i: {"x": i}, list(range(10)))),
    )

    t2 = TriggerDagRunOperator.partial(
        task_id='second',
        trigger_dag_id='mydag'
    ).expand(conf=[{'a': 1}])

    t1 >> t2

# uncomment these lines to fix the error
# t2_patch = dag.task_dict['second']
# t2_patch.operator_extra_links = ['Triggered DAG']
# dag.task_dict.update({'second': t2_patch})
```
As subDag ('mydag') I use these DAG:
```python
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from datetime import datetime

with DAG(
    'mydag',
    schedule_interval=None,
    catchup=False,
    start_date=datetime(2019, 12, 2)
) as dag:
    t1 = PythonOperator(
        task_id='first',
        python_callable=lambda: print("first"),
    )

    t2 = PythonOperator(
        task_id='second',
        python_callable=lambda: print("second"),
    )

    t1 >> t2
```
### Operating System
Ubuntu 22.04 LTS
### Versions of Apache Airflow Providers
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-sqlite==2.1.3
### Deployment
Virtualenv installation
### Deployment details
Python 3.10.4
pip 22.0.2
### Anything else
Currently, for demonstration purposes, I am using a fully local Airflow installation: single node, SequentialExecutor and a SQLite database backend. But the same issue also appeared on a multi-node installation with either CeleryExecutor or LocalExecutor and a PostgreSQL database in the backend.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24653 | https://github.com/apache/airflow/pull/24676 | 48ceda22bdbee50b2d6ca24767164ce485f3c319 | 8dcafdfcdddc77fdfd2401757dcbc15bfec76d6b | 2022-06-25T14:13:29Z | python | 2022-06-28T02:59:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,618 | ["airflow/providers/oracle/hooks/oracle.py", "airflow/utils/db.py", "docs/apache-airflow-providers-oracle/connections/oracle.rst", "tests/providers/oracle/hooks/test_oracle.py"] | Failed to retrieve data from Oracle database with UTF-8 charset | ### Apache Airflow Provider(s)
oracle
### Versions of Apache Airflow Providers
apache-airflow-providers-oracle==3.1.0
### Apache Airflow version
2.3.2 (latest released)
### Operating System
Linux 4.19.79-1.el7.x86_64
### Deployment
Docker-Compose
### Deployment details
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0
Python: 3.8
Oracle database charset: UTF-8 (returned by `SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_NCHAR_CHARACTERSET'`)
Oracle's client environment:
- LC_CTYPE=C.UTF-8
- NLS_LANG=AMERICAN_AMERICA.CL8MSWIN1251
- LC_ALL=C.UTF-8
### What happened
Any query to an Oracle database with a UTF-8 charset fails with this error:
> oracledb.exceptions.NotSupportedError: DPY-3012: national character set id 871 is not supported by python-oracledb in thin mode
### What you think should happen instead
Definitely, it should work, as it did in the previous Oracle provider version (3.0.0).
A quick search shows that the `python-oracledb` package, which replaces `cx_Oracle` in 3.1.0, uses **thin** driver mode by default, and it seems that the UTF-8 codepage is not supported in that mode ( [see this issue](https://stackoverflow.com/questions/72465536/python-oracledb-new-cx-oracle-connection-generating-notsupportederror-dpy-3012) ). In order to get to thick mode, a call to `oracledb.init_oracle_client()` is required before any connection is made ( [see here](https://python-oracledb.readthedocs.io/en/latest/api_manual/module.html#oracledb.init_oracle_client) ).
Indeed, if I add this call to `airflow/providers/oracle/hooks/oracle.py`, everything works fine. The resulting code looks like this:
```
import math
import warnings
from datetime import datetime
from typing import Dict, List, Optional, Union
import oracledb
oracledb.init_oracle_client()
...
```
Downgrading to version 3.0.0 also helps, but I suppose there should be some permanent solution, like adding a configuration parameter or similar.
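A rough sketch of one configurable approach (the `thick_mode` extra key and its handling here are assumptions for illustration, not the provider's actual API):
```python
# Sketch: enable python-oracledb thick mode only when the Airflow connection
# asks for it via its Extra field.
import oracledb

def get_conn(self):
    conn = self.get_connection(self.oracle_conn_id)
    extra = conn.extra_dejson
    if extra.get("thick_mode", False):
        # must be called before the first connection is created
        oracledb.init_oracle_client(lib_dir=extra.get("thick_mode_lib_dir"))
    # ... then build conn_config and call oracledb.connect(**conn_config) as before
```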
### How to reproduce
- Setup an Oracle database with UTF8 charset
- Setup an Airflow connection with `oracle` type
- Create an operator which issues a `SELECT` statement against the database
### Anything else
Task execution log as follows:
> [2022-06-23, 17:35:36 MSK] {task_command.py:370} INFO - Running <TaskInstance: nip-stage-load2.load-dict.load-sa_user scheduled__2022-06-22T00:00:00+00:00 [running]> on host dwh_develop_scheduler
> [2022-06-23, 17:35:37 MSK] {taskinstance.py:1569} INFO - Exporting the following env vars:
> AIRFLOW_CTX_DAG_EMAIL=airflow@example.com
> AIRFLOW_CTX_DAG_OWNER=airflow
> AIRFLOW_CTX_DAG_ID=nip-stage-load2
> AIRFLOW_CTX_TASK_ID=load-dict.load-sa_user
> AIRFLOW_CTX_EXECUTION_DATE=2022-06-22T00:00:00+00:00
> AIRFLOW_CTX_TRY_NUMBER=1
> AIRFLOW_CTX_DAG_RUN_ID=scheduled__2022-06-22T00:00:00+00:00
> [2022-06-23, 17:35:37 MSK] {base.py:68} INFO - Using connection ID 'nip_standby' for task execution.
> [2022-06-23, 17:35:37 MSK] {base.py:68} INFO - Using connection ID 'stage' for task execution.
> [2022-06-23, 17:35:37 MSK] {data_transfer.py:198} INFO - Executing:
> SELECT * FROM GMP.SA_USER
> [2022-06-23, 17:35:37 MSK] {base.py:68} INFO - Using connection ID 'nip_standby' for task execution.
> [2022-06-23, 17:35:37 MSK] {taskinstance.py:1889} ERROR - Task failed with exception
> Traceback (most recent call last):
> File "/home/airflow/.local/lib/python3.8/site-packages/dwh_etl/operators/data_transfer.py", line 265, in execute
> if not self.no_check and self.compare_datasets(self.object_name, src, dest):
> File "/home/airflow/.local/lib/python3.8/site-packages/dwh_etl/operators/data_transfer.py", line 199, in compare_datasets
> src_df = src.get_pandas_df(sql)
> File "/home/airflow/.local/lib/python3.8/site-packages/airflow/hooks/dbapi.py", line 128, in get_pandas_df
> with closing(self.get_conn()) as conn:
> File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/oracle/hooks/oracle.py", line 149, in get_conn
> conn = oracledb.connect(**conn_config)
> File "/home/airflow/.local/lib/python3.8/site-packages/oracledb/connection.py", line 1000, in connect
> return conn_class(dsn=dsn, pool=pool, params=params, **kwargs)
> File "/home/airflow/.local/lib/python3.8/site-packages/oracledb/connection.py", line 128, in __init__
> impl.connect(params_impl)
> File "src/oracledb/impl/thin/connection.pyx", line 345, in oracledb.thin_impl.ThinConnImpl.connect
> File "src/oracledb/impl/thin/connection.pyx", line 163, in oracledb.thin_impl.ThinConnImpl._connect_with_params
> File "src/oracledb/impl/thin/connection.pyx", line 129, in oracledb.thin_impl.ThinConnImpl._connect_with_description
> File "src/oracledb/impl/thin/connection.pyx", line 250, in oracledb.thin_impl.ThinConnImpl._connect_with_address
> File "src/oracledb/impl/thin/protocol.pyx", line 197, in oracledb.thin_impl.Protocol._connect_phase_two
> File "src/oracledb/impl/thin/protocol.pyx", line 263, in oracledb.thin_impl.Protocol._process_message
> File "src/oracledb/impl/thin/protocol.pyx", line 242, in oracledb.thin_impl.Protocol._process_message
> File "src/oracledb/impl/thin/messages.pyx", line 280, in oracledb.thin_impl.Message.process
> File "src/oracledb/impl/thin/messages.pyx", line 2094, in oracledb.thin_impl.ProtocolMessage._process_message
> File "/home/airflow/.local/lib/python3.8/site-packages/oracledb/errors.py", line 103, in _raise_err
> raise exc_type(_Error(message)) from cause
> oracledb.exceptions.NotSupportedError: DPY-3012: national character set id 871 is not supported by python-oracledb in thin mode
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24618 | https://github.com/apache/airflow/pull/26576 | ee21c1bac4cb5bb1c19ea9e5e84ee9b5854ab039 | b254a9f4bead4e5d4f74c633446da38550f8e0a1 | 2022-06-23T14:49:31Z | python | 2022-09-28T06:14:46Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,597 | ["tests/system/providers/databricks/example_databricks_sql.py"] | Error in docstring in the example DAG for Databricks example_databricks_sql.py | ### What do you see as an issue?
The docstring in example_databricks_sql.py doesn't explain the code in that file. It seems to have been copied from the example_databricks.py file.
### Solving the problem
Put the right docstring into the file example_databricks_sql.py.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24597 | https://github.com/apache/airflow/pull/26157 | 6045f7ad697e2bdb934add1a8aeae5a817306b22 | 5948d7fd100841cc623caeba438e97b640c2df90 | 2022-06-22T11:22:28Z | python | 2022-09-19T10:53:05Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,574 | ["airflow/providers/airbyte/hooks/airbyte.py", "airflow/providers/airbyte/operators/airbyte.py", "tests/providers/airbyte/hooks/test_airbyte.py"] | `AirbyteHook` add cancel job option | ### Apache Airflow Provider(s)
airbyte
### Versions of Apache Airflow Providers
I want to cancel the job if it runs for more than a specific time. The task gets a timeout; however, the Airbyte job was not cancelled. It seems the on-kill feature has not been implemented.
Workaround:
Create a custom operator, implement a cancel call in the hook, and call it from the `on_kill` function:
```python
def on_kill(self):
    if self.job_id:
        self.log.error('on_kill: stopping airbyte Job %s', self.job_id)
        self.hook.cancel_job(self.job_id)
```
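For reference, a minimal sketch of what the hook-side `cancel_job` could look like (the endpoint path and payload follow the public Airbyte jobs API, and the subclass name is made up; treat the details as assumptions):
```python
# Sketch: cancel a running Airbyte job, mirroring how the hook already submits
# syncs through HttpHook.run().
from airflow.providers.airbyte.hooks.airbyte import AirbyteHook

class CancellableAirbyteHook(AirbyteHook):
    def cancel_job(self, job_id: int):
        return self.run(
            endpoint=f"api/{self.api_version}/jobs/cancel",
            json={"id": job_id},
            headers={"accept": "application/json"},
        )
```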
### Apache Airflow version
2.0.2
### Operating System
Linux
### Deployment
MWAA
### Deployment details
Airflow 2.0.2
### What happened
The Airbyte job was not cancelled upon timeout.
### What you think should happen instead
it should cancel the job
### How to reproduce
Make sure the job runs longer than the timeout:
```python
sync_source_destination = AirbyteTriggerSyncOperator(
    task_id=f'airbyte_{key}',
    airbyte_conn_id='airbyte_con',
    connection_id=key,
    asynchronous=False,
    execution_timeout=timedelta(minutes=2)
)
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24574 | https://github.com/apache/airflow/pull/24593 | 45b11d4ed1412c00ebf32a03ab5ea3a06274f208 | c118b2836f7211a0c3762cff8634b7b9a0d1cf0b | 2022-06-21T03:16:53Z | python | 2022-06-29T06:43:53Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,572 | ["docs/apache-airflow-providers-snowflake/connections/snowflake.rst"] | Snowflake Provider connection documentation is misleading | ### What do you see as an issue?
Relevant page: https://airflow.apache.org/docs/apache-airflow-providers-snowflake/stable/connections/snowflake.html
## Behavior in the Airflow package
The `SnowflakeHook` object in Airflow behaves oddly compared to some other database hooks like Postgres (so extra clarity in the documentation is beneficial).
Most notably, the `SnowflakeHook` does _not_ make use of the either the `host` or `port` of the `Connection` object it consumes. It is completely pointless to specify these two fields.
When constructing the URL in a runtime context, `snowflake.sqlalchemy.URL` is used for parsing. `URL()` allows for either `account` or `host` to be specified as kwargs. Either one of these 2 kwargs will correspond with what we'd conventionally call the host in a typical URL's anatomy. However, because `SnowflakeHook` never parses `host`, any `host` defined in the Connection object would never get this far into the parsing.
## Issue with the documentation
Right now the documentation does not make clear that it is completely pointless to specify the `host`. The documentation correctly omits the port, but says that the host is optional. It does not warn the user about this field never being consumed at all by the `SnowflakeHook` ([source here](https://github.com/apache/airflow/blob/main/airflow/providers/snowflake/hooks/snowflake.py)).
This can lead to some confusion, especially because the Snowflake URI consumed by `SQLAlchemy` (which many people using Snowflake will be familiar with) uses either the "account" or "host" as its host. So a user coming from SQLAlchemy may think it is fine to put the account in the "host" field and skip filling in the "account" inside the extras (after all, it's "extra"), whereas that doesn't work.
I would argue that if it is correct to omit the `port` in the documentation (which it is), then `host` should also be excluded.
Furthermore, the documentation reinforces this confusion with the last few lines, where an environment variable example connection is defined that uses a host.
Finally, the documentation says "When specifying the connection in environment variable you should specify it using URI syntax", which is no longer true as of 2.3.0.
### Solving the problem
I have 3 proposals for how the documentation should be updated to better reflect how the `SnowflakeHook` actually works.
1. The `Host` option should not be listed as part of the "Configuring the Connection" section.
2. The example URI should remove the host. The new example URI would look like this: `snowflake://user:password@/db-schema?account=account&database=snow-db®ion=us-east&warehouse=snow-warehouse`. This URI with a blank host works fine; you can test this yourself:
```python
from airflow.models.connection import Connection
c = Connection(conn_id="foo", uri="snowflake://user:password@/db-schema?account=account&database=snow-db®ion=us-east&warehouse=snow-warehouse")
print(c.host)
print(c.extra_dejson)
```
3. An example should be provided of a valid Snowflake construction using the JSON. This example would not only work on its own merits of defining an environment variable connection valid for 2.3.0, but it also would highlight some of the idiosyncrasies of how Airflow defines connections to Snowflake. This would also be valuable as a reference for the AWS `SecretsManagerBackend` for when `full_url_mode` is set to `False`.
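To illustrate point 3, something along these lines could serve as the example (a sketch using the JSON connection format available since 2.3.0; all values are placeholders):
```python
# Sketch: a Snowflake connection defined as JSON in an environment variable,
# with account/region/etc. in "extra" rather than a (never-consumed) host.
import json
import os

os.environ["AIRFLOW_CONN_SNOWFLAKE_DEFAULT"] = json.dumps(
    {
        "conn_type": "snowflake",
        "login": "user",
        "password": "password",
        "schema": "db-schema",
        "extra": {
            "account": "account",
            "database": "snow-db",
            "region": "us-east",
            "warehouse": "snow-warehouse",
        },
    }
)
```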
### Anything else
I wasn't sure whether to label this issue as a provider issue or documentation issue; I saw templates for either but not both.
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24572 | https://github.com/apache/airflow/pull/24573 | 02d8f96bfbc43e780db0220dd7647af0c0f46093 | 2fb93f88b120777330b6ed13b24fa07df279c41e | 2022-06-21T01:41:15Z | python | 2022-06-27T21:58:10Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,566 | ["airflow/migrations/versions/0080_2_0_2_change_default_pool_slots_to_1.py"] | Migration changes column to NOT NULL without updating NULL data first | ### Apache Airflow version
2.3.2 (latest released)
### What happened
During an upgrade from Airflow 1.x, I encountered a failure in the migration https://github.com/apache/airflow/blob/05c542dfa8eee9b4cdca4e9370f459ce807354b2/airflow/migrations/versions/0080_2_0_2_change_default_pool_slots_to_1.py
In PR #20962, on these lines https://github.com/apache/airflow/pull/20962/files#diff-9e46226bab06a05ef0040d1f8cc08c81ba94455ca9a170a0417352466242f2c1L61-L63, the update was removed, which breaks the migration if the original table contains NULLs in that column (at least on a Postgres DB).
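For reference, a sketch of the backfill step that would make the migration safe again (reusing the migration's own Alembic calls; the exact placement is an assumption):
```python
# Sketch: backfill NULLs before tightening the column to NOT NULL.
op.execute("UPDATE task_instance SET pool_slots = 1 WHERE pool_slots IS NULL")
with op.batch_alter_table("task_instance") as batch_op:
    batch_op.alter_column("pool_slots", existing_type=sa.Integer, nullable=False, server_default='1')
```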
### What you think should happen instead
_No response_
### How to reproduce
- Have pre 2.0.2 version deployed, where the column was nullable.
- Have task instance with `pool_slots = NULL`
- Try to migrate to latest version (or any version after #20962 was merged)
### Operating System
Custom NixOS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
We have NixOS with Airflow installed using setup.py with postgres as a DB.
### Anything else
```
INFO [alembic.runtime.migration] Running upgrade 449b4072c2da -> 8646922c8a04, Change default ``pool_slots`` to ``1``
Traceback (most recent call last):
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.NotNullViolation: column "pool_slots" contains null values
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/bin/.airflow-wrapped", line 9, in <module>
sys.exit(main())
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/cli/commands/db_command.py", line 35, in initdb
db.initdb()
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/utils/db.py", line 648, in initdb
upgradedb(session=session)
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/utils/db.py", line 1449, in upgradedb
command.upgrade(config, revision=to_revision or 'heads')
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/command.py", line 320, in upgrade
script.run_env()
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/script/base.py", line 563, in run_env
util.load_python_file(self.dir, "env.py")
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/util/pyfiles.py", line 92, in load_python_file
module = load_module_py(module_id, path)
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/util/pyfiles.py", line 108, in load_module_py
spec.loader.exec_module(module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/migrations/env.py", line 107, in <module>
run_migrations_online()
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/migrations/env.py", line 101, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/runtime/environment.py", line 851, in run_migrations
self.get_context().run_migrations(**kw)
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/runtime/migration.py", line 620, in run_migrations
step.migration_fn(**kw)
File "/nix/store/[redacted-hash2]-python3.9-apache-airflow-2.3.2/lib/python3.9/site-packages/airflow/migrations/versions/0080_2_0_2_change_default_pool_slots_to_1.py", line 41, in upgrade
batch_op.alter_column("pool_slots", existing_type=sa.Integer, nullable=False, server_default='1')
File "/nix/store/lb7982cwd56am6nzx1ix0aljz416w6mw-python3-3.9.6/lib/python3.9/contextlib.py", line 124, in __exit__
next(self.gen)
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/operations/base.py", line 374, in batch_alter_table
impl.flush()
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/operations/batch.py", line 108, in flush
fn(*arg, **kw)
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/ddl/postgresql.py", line 170, in alter_column
super(PostgresqlImpl, self).alter_column(
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/ddl/impl.py", line 227, in alter_column
self._exec(
File "/nix/store/[redacted-hash3]-python3.9-alembic-1.7.7/lib/python3.9/site-packages/alembic/ddl/impl.py", line 193, in _exec
return conn.execute(construct, multiparams)
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1200, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/sql/ddl.py", line 77, in _execute_on_connection
return connection._execute_ddl(
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1290, in _execute_ddl
ret = self._execute_context(
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
self._handle_dbapi_exception(
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
util.raise_(
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/nix/store/[redacted-hash1]-python3.9-SQLAlchemy-1.4.9/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.errors.NotNullViolation) column "pool_slots" contains null values
[SQL: ALTER TABLE task_instance ALTER COLUMN pool_slots SET NOT NULL]
(Background on this error at: http://sqlalche.me/e/14/gkpj)
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24566 | https://github.com/apache/airflow/pull/24585 | 75db755f4b06b4cfdd3eb2651dbf88ddba2d831f | 9f58e823329d525c0e2b3950ada7e0e047ee7cfd | 2022-06-20T17:57:34Z | python | 2022-06-29T01:55:41Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,526 | ["docs/apache-airflow/installation/upgrading.rst", "docs/spelling_wordlist.txt"] | upgrading from 2.2.3 or 2.2.5 to 2.3.2 fails on migration-job | ### Apache Airflow version
2.3.2 (latest released)
### What happened
Upgrading Airflow 2.2.3 or 2.2.5 -> 2.3.2 fails on the migration job.
**first time upgrade execution:**
```
Referencing column 'task_id' and referenced column 'task_id' in foreign key constraint 'task_map_task_instance_fkey' are incompatible.")
[SQL:
CREATE TABLE task_map (
dag_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
task_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
run_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
map_index INTEGER NOT NULL,
length INTEGER NOT NULL,
`keys` JSON,
PRIMARY KEY (dag_id, task_id, run_id, map_index),
CONSTRAINT task_map_length_not_negative CHECK (length >= 0),
CONSTRAINT task_map_task_instance_fkey FOREIGN KEY(dag_id, task_id, run_id, map_index) REFERENCES task_instance (dag_id, task_id, run_id, map_index) ON DELETE CASCADE
)
]
```
**after the first failed execution (most likely a consequence of the first failed execution):**
```
Can't DROP 'task_reschedule_ti_fkey'; check that column/key exists")
[SQL: ALTER TABLE task_reschedule DROP FOREIGN KEY task_reschedule_ti_fkey]
```
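For what it's worth, this kind of "incompatible" foreign-key error on MySQL 8 usually points at a type or charset/collation mismatch between the new `utf8mb3_bin` columns and the referenced `task_instance` columns. A quick way to inspect the current state (a sketch; the connection URL is a placeholder):
```python
# Sketch: check the charset/collation of the columns the new FK references.
from sqlalchemy import create_engine, text

engine = create_engine("mysql+mysqldb://airflow:***@host:3306/airflow")
query = text(
    "SELECT column_name, character_set_name, collation_name "
    "FROM information_schema.columns "
    "WHERE table_schema = 'airflow' AND table_name = 'task_instance' "
    "AND column_name IN ('dag_id', 'task_id', 'run_id')"
)
with engine.connect() as conn:
    for row in conn.execute(query):
        print(row)
```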
### What you think should happen instead
The migration-job shouldn't fail ;)
### How to reproduce
Reproducible every time in my environment: I just need to restore the last working DB snapshot (Airflow version 2.2.3) and then deploy Airflow 2.3.2.
I can update to 2.2.5 in between, but I ran into the same issue when updating to 2.3.2.
### Operating System
Debian GNU/Linux 10 (buster) - apache/airflow:2.3.2-python3.8 (hub.docker.com)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==2.4.0
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-cncf-kubernetes==2.2.0
apache-airflow-providers-docker==2.3.0
apache-airflow-providers-elasticsearch==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-google==6.2.0
apache-airflow-providers-grpc==2.0.1
apache-airflow-providers-hashicorp==2.1.1
apache-airflow-providers-http==2.0.1
apache-airflow-providers-imap==2.0.1
apache-airflow-providers-microsoft-azure==3.4.0
apache-airflow-providers-mysql==2.1.1
apache-airflow-providers-odbc==2.0.1
apache-airflow-providers-postgres==2.4.0
apache-airflow-providers-redis==2.0.1
apache-airflow-providers-sendgrid==2.0.1
apache-airflow-providers-sftp==2.3.0
apache-airflow-providers-slack==4.1.0
apache-airflow-providers-sqlite==2.0.1
apache-airflow-providers-ssh==2.3.0
apache-airflow-providers-tableau==2.1.4
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
- K8s Rev: v1.21.12-eks-a64ea69
- helm chart version: 1.6.0
- Database: AWS RDS MySQL 8.0.28
### Anything else
Full error log for the **first** execution:
```
/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py:529: DeprecationWarning: The auth_backend option in [api] has been renamed to auth_backends - the old setting has been used, but please update your config.
option = self._get_option_from_config_file(deprecated_key, deprecated_section, key, kwargs, section)
/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py:356: FutureWarning: The auth_backends setting in [api] has had airflow.api.auth.backend.session added in the running config, which is needed by the UI. Please update your config before Apache Airflow 3.0.
warnings.warn(
DB: mysql+mysqldb://airflow:***@test-airflow2-db-blue.fsgfsdcfds76.eu-central-1.rds.amazonaws.com:3306/airflow
Performing upgrade with database mysql+mysqldb://airflow:***@test-airflow2-db-blue.fsgfsdcfds76.eu-central-1.rds.amazonaws.com:3306/airflow
[2022-06-17 12:19:59,724] {db.py:920} WARNING - Found 33 duplicates in table task_fail. Will attempt to move them.
[2022-06-17 12:36:18,813] {db.py:1448} INFO - Creating tables
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade be2bfac3da23 -> c381b21cb7e4, Create a ``session`` table to store web session data
INFO [alembic.runtime.migration] Running upgrade c381b21cb7e4 -> 587bdf053233, Add index for ``dag_id`` column in ``job`` table.
INFO [alembic.runtime.migration] Running upgrade 587bdf053233 -> 5e3ec427fdd3, Increase length of email and username in ``ab_user`` and ``ab_register_user`` table to ``256`` characters
INFO [alembic.runtime.migration] Running upgrade 5e3ec427fdd3 -> 786e3737b18f, Add ``timetable_description`` column to DagModel for UI.
INFO [alembic.runtime.migration] Running upgrade 786e3737b18f -> f9da662e7089, Add ``LogTemplate`` table to track changes to config values ``log_filename_template``
INFO [alembic.runtime.migration] Running upgrade f9da662e7089 -> e655c0453f75, Add ``map_index`` column to TaskInstance to identify task-mapping,
and a ``task_map`` table to track mapping values from XCom.
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (3780, "Referencing column 'task_id' and referenced column 'task_id' in foreign key constraint 'task_map_task_instance_fkey' are incompatible.")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/db_command.py", line 82, in upgradedb
db.upgradedb(to_revision=to_revision, from_revision=from_revision, show_sql_only=args.show_sql_only)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/db.py", line 1449, in upgradedb
command.upgrade(config, revision=to_revision or 'heads')
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/command.py", line 322, in upgrade
script.run_env()
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/script/base.py", line 569, in run_env
util.load_python_file(self.dir, "env.py")
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 94, in load_python_file
module = load_module_py(module_id, path)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 110, in load_module_py
spec.loader.exec_module(module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/env.py", line 107, in <module>
run_migrations_online()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/env.py", line 101, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/runtime/environment.py", line 853, in run_migrations
self.get_context().run_migrations(**kw)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/runtime/migration.py", line 623, in run_migrations
step.migration_fn(**kw)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/versions/0100_2_3_0_add_taskmap_and_map_id_on_taskinstance.py", line 75, in upgrade
op.create_table(
File "<string>", line 8, in create_table
File "<string>", line 3, in create_table
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/ops.py", line 1254, in create_table
return operations.invoke(op)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/base.py", line 394, in invoke
return fn(self, operation)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/toimpl.py", line 114, in create_table
operations.impl.create_table(table)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 354, in create_table
self._exec(schema.CreateTable(table))
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 195, in _exec
return conn.execute(construct, multiparams)
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1200, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 77, in _execute_on_connection
return connection._execute_ddl(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1290, in _execute_ddl
ret = self._execute_context(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
self._handle_dbapi_exception(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
util.raise_(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (3780, "Referencing column 'task_id' and referenced column 'task_id' in foreign key constraint 'task_map_task_instance_fkey' are incompatible.")
[SQL:
CREATE TABLE task_map (
dag_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
task_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
run_id VARCHAR(250) COLLATE utf8mb3_bin NOT NULL,
map_index INTEGER NOT NULL,
length INTEGER NOT NULL,
`keys` JSON,
PRIMARY KEY (dag_id, task_id, run_id, map_index),
CONSTRAINT task_map_length_not_negative CHECK (length >= 0),
CONSTRAINT task_map_task_instance_fkey FOREIGN KEY(dag_id, task_id, run_id, map_index) REFERENCES task_instance (dag_id, task_id, run_id, map_index) ON DELETE CASCADE
)
]
(Background on this error at: http://sqlalche.me/e/14/e3q8)
```
Full error log **after** the first execution (likely caused by the first execution):
```
/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py:529: DeprecationWarning: The auth_backend option in [api] has been renamed to auth_backends - the old setting has been used, but please update your config.
option = self._get_option_from_config_file(deprecated_key, deprecated_section, key, kwargs, section)
/home/airflow/.local/lib/python3.8/site-packages/airflow/configuration.py:356: FutureWarning: The auth_backends setting in [api] has had airflow.api.auth.backend.session added in the running config, which is needed by the UI. Please update your config before Apache Airflow 3.0.
warnings.warn(
DB: mysql+mysqldb://airflow:***@test-airflow2-db-blue.cndbtlpttl69.eu-central-1.rds.amazonaws.com:3306/airflow
Performing upgrade with database mysql+mysqldb://airflow:***@test-airflow2-db-blue.cndbtlpttl69.eu-central-1.rds.amazonaws.com:3306/airflow
[2022-06-17 12:41:53,882] {db.py:1448} INFO - Creating tables
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade f9da662e7089 -> e655c0453f75, Add ``map_index`` column to TaskInstance to identify task-mapping,
and a ``task_map`` table to track mapping values from XCom.
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
MySQLdb._exceptions.OperationalError: (1091, "Can't DROP 'task_reschedule_ti_fkey'; check that column/key exists")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/db_command.py", line 82, in upgradedb
db.upgradedb(to_revision=to_revision, from_revision=from_revision, show_sql_only=args.show_sql_only)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/db.py", line 1449, in upgradedb
command.upgrade(config, revision=to_revision or 'heads')
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/command.py", line 322, in upgrade
script.run_env()
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/script/base.py", line 569, in run_env
util.load_python_file(self.dir, "env.py")
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 94, in load_python_file
module = load_module_py(module_id, path)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 110, in load_module_py
spec.loader.exec_module(module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/env.py", line 107, in <module>
run_migrations_online()
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/env.py", line 101, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/runtime/environment.py", line 853, in run_migrations
self.get_context().run_migrations(**kw)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/runtime/migration.py", line 623, in run_migrations
step.migration_fn(**kw)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/migrations/versions/0100_2_3_0_add_taskmap_and_map_id_on_taskinstance.py", line 49, in upgrade
batch_op.drop_index("idx_task_reschedule_dag_task_run")
File "/usr/local/lib/python3.8/contextlib.py", line 120, in __exit__
next(self.gen)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/base.py", line 376, in batch_alter_table
impl.flush()
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/operations/batch.py", line 111, in flush
fn(*arg, **kw)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/mysql.py", line 155, in drop_constraint
super(MySQLImpl, self).drop_constraint(const)
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 338, in drop_constraint
self._exec(schema.DropConstraint(const))
File "/home/airflow/.local/lib/python3.8/site-packages/alembic/ddl/impl.py", line 195, in _exec
return conn.execute(construct, multiparams)
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1200, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/sql/ddl.py", line 77, in _execute_on_connection
return connection._execute_ddl(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1290, in _execute_ddl
ret = self._execute_context(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
self._handle_dbapi_exception(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
util.raise_(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/home/airflow/.local/lib/python3.8/site-packages/MySQLdb/connections.py", line 254, in query
_mysql.connection.query(self, query)
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1091, "Can't DROP 'task_reschedule_ti_fkey'; check that column/key exists")
[SQL: ALTER TABLE task_reschedule DROP FOREIGN KEY task_reschedule_ti_fkey]
(Background on this error at: http://sqlalche.me/e/14/e3q8)
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24526 | https://github.com/apache/airflow/pull/25938 | 994f18872af8d2977d78e6d1a27314efbeedb886 | e2592628cb0a6a37efbacc64064dbeb239e83a50 | 2022-06-17T13:59:27Z | python | 2022-08-25T14:15:28Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,525 | ["airflow/models/baseoperator.py", "tests/models/test_baseoperator.py"] | mini-scheduler raises AttributeError: 'NoneType' object has no attribute 'keys' | ### Apache Airflow version
2.3.2 (latest released)
### What happened
The mini-scheduler run after a task finishes sometimes fails with an error "AttributeError: 'NoneType' object has no attribute 'keys'"; see full traceback below.
### What you think should happen instead
_No response_
### How to reproduce
The minimal reproducing example I could find is this:
```python
import pendulum
from airflow.models import BaseOperator
from airflow.utils.task_group import TaskGroup
from airflow.decorators import task
from airflow import DAG
@task
def task0():
    pass


class Op0(BaseOperator):
    template_fields = ["some_input"]

    def __init__(self, some_input, **kwargs):
        super().__init__(**kwargs)
        self.some_input = some_input


if __name__ == "__main__":
    with DAG("dag0", start_date=pendulum.now()) as dag:
        with TaskGroup(group_id="tg1"):
            Op0(task_id="task1", some_input=task0())

    dag.partial_subset("tg1.task1")
```
Running this script with airflow 2.3.2 produces this traceback:
```
Traceback (most recent call last):
File "/app/airflow-bug-minimal.py", line 22, in <module>
dag.partial_subset("tg1.task1")
File "/venv/lib/python3.10/site-packages/airflow/models/dag.py", line 2013, in partial_subset
dag.task_dict = {
File "/venv/lib/python3.10/site-packages/airflow/models/dag.py", line 2014, in <dictcomp>
t.task_id: _deepcopy_task(t)
File "/venv/lib/python3.10/site-packages/airflow/models/dag.py", line 2011, in _deepcopy_task
return copy.deepcopy(t, memo)
File "/usr/local/lib/python3.10/copy.py", line 153, in deepcopy
y = copier(memo)
File "/venv/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 1156, in __deepcopy__
setattr(result, k, copy.deepcopy(v, memo))
File "/venv/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 1000, in __setattr__
self.set_xcomargs_dependencies()
File "/venv/lib/python3.10/site-packages/airflow/models/baseoperator.py", line 1107, in set_xcomargs_dependencies
XComArg.apply_upstream_relationship(self, arg)
File "/venv/lib/python3.10/site-packages/airflow/models/xcom_arg.py", line 186, in apply_upstream_relationship
op.set_upstream(ref.operator)
File "/venv/lib/python3.10/site-packages/airflow/models/taskmixin.py", line 241, in set_upstream
self._set_relatives(task_or_task_list, upstream=True, edge_modifier=edge_modifier)
File "/venv/lib/python3.10/site-packages/airflow/models/taskmixin.py", line 185, in _set_relatives
dags: Set["DAG"] = {task.dag for task in [*self.roots, *task_list] if task.has_dag() and task.dag}
File "/venv/lib/python3.10/site-packages/airflow/models/taskmixin.py", line 185, in <setcomp>
dags: Set["DAG"] = {task.dag for task in [*self.roots, *task_list] if task.has_dag() and task.dag}
File "/venv/lib/python3.10/site-packages/airflow/models/dag.py", line 508, in __hash__
val = tuple(self.task_dict.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```
Note that the call to `dag.partial_subset` usually happens in the mini-scheduler: https://github.com/apache/airflow/blob/2.3.2/airflow/jobs/local_task_job.py#L253
### Operating System
Linux (Debian 9)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24525 | https://github.com/apache/airflow/pull/24865 | 17564a40a7b8b5dee878cc634077e0a2e63e36fb | c23b31cd786760da8a8e39ecbcf2c0d31e50e594 | 2022-06-17T13:08:16Z | python | 2022-07-06T10:34:48Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,487 | ["airflow/models/expandinput.py", "tests/models/test_mappedoperator.py"] | Dynamic mapping over KubernetesPodOperator results produces triplicate child tasks | ### Apache Airflow version
2.3.2 (latest released)
### What happened
Attempting to use [dynamic task mapping](https://airflow.apache.org/docs/apache-airflow/2.3.0/concepts/dynamic-task-mapping.html#mapping-over-result-of-classic-operators) on the results of a `KubernetesPodOperator` (or `GKEStartPodOperator`) produces 3x as many downstream task instances as it should. Two-thirds of the downstream tasks fail more or less instantly.
### What you think should happen instead
The problem is that the number of downstream tasks is calculated by counting XCOMs associated with the upstream task, assuming that each `task_id` has a single XCOM:
https://github.com/apache/airflow/blob/fe5e689adfe3b2f9bcc37d3975ae1aea9b55e28a/airflow/models/mappedoperator.py#L606-L615
However the `KubernetesPodOperator` pushes two XCOMs in its `.execute()` method:
https://github.com/apache/airflow/blob/fe5e689adfe3b2f9bcc37d3975ae1aea9b55e28a/airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py#L425-L426
So the number of downstream tasks ends up being 3x what it should be.
### How to reproduce
Reproducing the behavior requires access to a Kubernetes cluster, but in pseudo-code, a DAG like this should demonstrate the behavior:
```
with DAG(...) as dag:
    # produces an output list with N elements
    first_pod = GKEStartPodOperator(..., do_xcom_push=True)

    # produces 1 output per input, so N task instances are created, each with a single output
    second_pod = GKEStartPodOperator.partial(..., do_xcom_push=True).expand(id=XComArg(first_pod))

    # should have N task instances created, but actually gets 3N task instances created
    third_pod = GKEStartPodOperator.partial(..., do_xcom_push=True).expand(id=XComArg(second_pod))
```
### Operating System
macOS 12.4
### Versions of Apache Airflow Providers
apache-airflow-providers-cncf-kubernetes==4.1.0
apache-airflow-providers-google==8.0.0
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
When I edit `mappedoperator.py` in my local deployment to filter on the XCom key, things behave as expected:
```
# Collect lengths from mapped upstreams.
xcom_query = (
    session.query(XCom.task_id, func.count(XCom.map_index))
    .group_by(XCom.task_id)
    .filter(
        XCom.dag_id == self.dag_id,
        XCom.run_id == run_id,
        XCom.key == 'return_value',  # <------- added this line
        XCom.task_id.in_(mapped_dep_keys),
        XCom.map_index >= 0,
    )
)
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24487 | https://github.com/apache/airflow/pull/24530 | df388a3d5364b748993e61b522d0b68ff8b8124a | a69095fea1722e153a95ef9da93b002b82a02426 | 2022-06-15T23:31:31Z | python | 2022-07-27T08:36:23Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,484 | ["airflow/migrations/versions/0111_2_3_3_add_indexes_for_cascade_deletes.py", "airflow/models/taskfail.py", "airflow/models/taskreschedule.py", "airflow/models/xcom.py", "docs/apache-airflow/migrations-ref.rst"] | `airflow db clean task_instance` takes a long time | ### Apache Airflow version
2.3.1
### What happened
When I ran the `airflow db clean task_instance` command, it can take up to 9 hours to complete. The database has around 3,215,220 rows in the `task_instance` table and 51,602 rows in the `dag_run` table. The overall size of the database is around 1 TB.
I believe the issue is caused by the cascade constraints on other tables, as well as the lack of indexes on the `task_instance` foreign keys.
Running a delete on a small number of rows shows that most of the time is spent in the `xcom` and `task_fail` tables:
```
explain (analyze,buffers,timing) delete from task_instance t1 where t1.run_id = 'manual__2022-05-11T01:09:05.856703+00:00'; rollback;
Trigger for constraint task_reschedule_ti_fkey: time=3.208 calls=23
Trigger for constraint task_map_task_instance_fkey: time=1.848 calls=23
Trigger for constraint xcom_task_instance_fkey: time=4457.779 calls=23
Trigger for constraint rtif_ti_fkey: time=3.135 calls=23
Trigger for constraint task_fail_ti_fkey: time=1164.183 calls=23
```
I temporarily fixed it by adding these indexes.
```
create index idx_task_reschedule_dr_fkey on task_reschedule (dag_id, run_id);
create index idx_xcom_ti_fkey on xcom (dag_id, task_id, run_id, map_index);
create index idx_task_fail_ti_fkey on task_fail (dag_id, task_id, run_id, map_index);
```
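For reference, this is how the command in question is invoked (a sketch; the timestamp is arbitrary, and `--dry-run` lets you preview the deletes first):
```bash
airflow db clean --clean-before-timestamp '2022-01-01 00:00:00+00:00' \
    --tables task_instance --dry-run
```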
### What you think should happen instead
It should not take 9 hours to complete a clean-up process. Before upgrading to 2.3.x, it was taking no more than 5 minutes.
### How to reproduce
_No response_
### Operating System
N/A
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24484 | https://github.com/apache/airflow/pull/24488 | 127f8f4de02422ade8f2c84f84d3262d6efde185 | 677c42227c08f705142f298ab88915f133cd94e5 | 2022-06-15T21:21:18Z | python | 2022-06-16T18:41:35Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,460 | ["airflow/providers/google/cloud/hooks/bigquery.py", "airflow/providers/google/cloud/operators/bigquery.py", "airflow/providers/google/cloud/triggers/bigquery.py", "docs/apache-airflow-providers-google/operators/cloud/bigquery.rst", "tests/providers/google/cloud/hooks/test_bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"] | let BigQueryGetData operator take a query string and as_dict flag | ### Description
Today the `airflow.providers.google.cloud.operators.bigquery.BigQueryGetDataOperator` only allows you to point at a specific dataset and table and to choose how many rows you want.
It already sets up a `BigQueryHook`, so it would be very easy to support a custom query from a string as well.
It would also be very convenient to have an `as_dict` flag to return the result as a list of dicts.
I am not an expert in Python, but here is my attempt at a modification of the current code (from 8.0.0rc2):
``` python
from typing import TYPE_CHECKING, Dict, Optional, Sequence, Union

from airflow.models import BaseOperator
from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook
from airflow.providers.google.cloud.operators.bigquery import BigQueryUIColors

if TYPE_CHECKING:
    from airflow.utils.context import Context


class BigQueryGetDataOperatorX(BaseOperator):
    """
    Fetches the data from a BigQuery table (alternatively fetch data for selected columns)
    and returns data in a python list. The number of elements in the returned list will
    be equal to the number of rows fetched. Each element in the list will again be a list
    where each element would represent the column values for that row.

    **Example Result**: ``[['Tony', '10'], ['Mike', '20'], ['Steve', '15']]``

    .. seealso::
        For more information on how to use this operator, take a look at the guide:
        :ref:`howto/operator:BigQueryGetDataOperator`

    .. note::
        If you pass fields to ``selected_fields`` which are in a different order than the
        order of columns already in the BQ table, the data will still be in the order of
        the BQ table. For example if the BQ table has 3 columns as ``[A,B,C]`` and you
        pass 'B,A' in the ``selected_fields``, the data would still be of the form ``'A,B'``.

    **Example**: ::

        get_data = BigQueryGetDataOperator(
            task_id='get_data_from_bq',
            dataset_id='test_dataset',
            table_id='Transaction_partitions',
            max_results=100,
            selected_fields='DATE',
            gcp_conn_id='airflow-conn-id'
        )

    :param dataset_id: The dataset ID of the requested table. (templated)
    :param table_id: The table ID of the requested table. (templated)
    :param max_results: The maximum number of records (rows) to be fetched
        from the table. (templated)
    :param selected_fields: List of fields to return (comma-separated). If
        unspecified, all fields are returned.
    :param gcp_conn_id: (Optional) The connection ID used to connect to Google Cloud.
    :param delegate_to: The account to impersonate using domain-wide delegation of authority,
        if any. For this to work, the service account making the request must have
        domain-wide delegation enabled.
    :param location: The location used for the operation.
    :param impersonation_chain: Optional service account to impersonate using short-term
        credentials, or chained list of accounts required to get the access_token
        of the last account in the list, which will be impersonated in the request.
        If set as a string, the account must grant the originating account
        the Service Account Token Creator IAM role.
        If set as a sequence, the identities from the list must grant
        Service Account Token Creator IAM role to the directly preceding identity, with first
        account from the list granting this role to the originating account (templated).
    :param query: (Optional) A SQL query to execute instead
    :param as_dict: if True returns the result as a list of dictionaries. Defaults to False
    """

    template_fields: Sequence[str] = (
        'dataset_id',
        'table_id',
        'max_results',
        'selected_fields',
        'impersonation_chain',
    )
    ui_color = BigQueryUIColors.QUERY.value

    def __init__(
        self,
        *,
        dataset_id: Optional[str] = None,
        table_id: Optional[str] = None,
        max_results: Optional[int] = 100,
        selected_fields: Optional[str] = None,
        gcp_conn_id: str = 'google_cloud_default',
        delegate_to: Optional[str] = None,
        location: Optional[str] = None,
        impersonation_chain: Optional[Union[str, Sequence[str]]] = None,
        query: Optional[str] = None,
        as_dict: bool = False,
        **kwargs,
    ) -> None:
        super().__init__(**kwargs)
        self.dataset_id = dataset_id
        self.table_id = table_id
        self.max_results = int(max_results)
        self.selected_fields = selected_fields
        self.gcp_conn_id = gcp_conn_id
        self.delegate_to = delegate_to
        self.location = location
        self.impersonation_chain = impersonation_chain
        self.query = query
        self.as_dict = as_dict
        if not query and not table_id:
            raise ValueError(
                'Table_id or query not set. Please provide either a dataset_id + table_id or a query string'
            )

    def execute(self, context: 'Context') -> list:
        self.log.info(
            'Fetching Data from %s.%s max results: %s', self.dataset_id, self.table_id, self.max_results
        )
        hook = BigQueryHook(
            gcp_conn_id=self.gcp_conn_id,
            delegate_to=self.delegate_to,
            impersonation_chain=self.impersonation_chain,
            location=self.location,
        )
        if not self.query:
            if not self.selected_fields:
                schema: Dict[str, list] = hook.get_schema(
                    dataset_id=self.dataset_id,
                    table_id=self.table_id,
                )
                if "fields" in schema:
                    self.selected_fields = ','.join(field["name"] for field in schema["fields"])
            # list_rows() returns a plain list of Row objects (it is not a context manager)
            rows = hook.list_rows(
                dataset_id=self.dataset_id,
                table_id=self.table_id,
                max_results=self.max_results,
                selected_fields=self.selected_fields,
            )
        else:
            # Run the query through the BigQuery client so column names come back with the rows
            client = hook.get_client(project_id=hook.project_id, location=self.location)
            rows = list(client.query(self.query).result(max_results=self.max_results))
        if self.as_dict:
            table_data = [dict(row.items()) for row in rows]
        else:
            table_data = [row.values() for row in rows]
        self.log.info('Total extracted rows: %s', len(table_data))
        return table_data
```
### Use case/motivation
This would simplify getting data from BigQuery into Airflow, instead of having to first store the data in a separate table with `BigQueryInsertJobOperator` and then fetch that.
It also simplifies handling the data with `as_dict`, in the same way that many other database connectors in Python do.
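For illustration, a minimal usage sketch of the proposed operator (the query, table names, and connection id are made up):
```python
# Assumes the BigQueryGetDataOperatorX class above is importable in the DAG file
run_query = BigQueryGetDataOperatorX(
    task_id='get_rows_as_dicts',
    query='SELECT name, total FROM my_dataset.my_table',  # hypothetical table
    max_results=10,
    as_dict=True,
    gcp_conn_id='google_cloud_default',
)
```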
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24460 | https://github.com/apache/airflow/pull/30887 | dff7e0de362e4cd318d7c285ec102923503eceb3 | b8f73768ec13f8d4cc1605cca3fa93be6caac473 | 2022-06-15T08:33:25Z | python | 2023-05-09T06:05:24Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,388 | ["airflow/models/abstractoperator.py", "airflow/models/baseoperator.py", "airflow/models/mappedoperator.py", "airflow/models/taskinstance.py", "airflow/utils/context.py", "tests/decorators/test_python.py", "tests/models/test_mappedoperator.py"] | Unable to access operator attrs within Jinja context for mapped tasks | ### Apache Airflow version
2.3.2 (latest released)
### What happened
When attempting to generate mapped SQL tasks using a Jinja-templated query that accesses operator attributes, an exception like the following is thrown:
`jinja2.exceptions.UndefinedError: 'airflow.models.mappedoperator.MappedOperator object' has no attribute '<operator attribute>'`
For example, when attempting to map `SQLValueCheckOperator` tasks with respect to `database` using a query of `SELECT COUNT(*) FROM {{ task.database }}.tbl;`:
`jinja2.exceptions.UndefinedError: 'airflow.models.mappedoperator.MappedOperator object' has no attribute 'database'`
Or, when using `SnowflakeOperator` and mapping via `parameters` of a query like `SELECT * FROM {{ task.parameters.tbl }};`:
`jinja2.exceptions.UndefinedError: 'airflow.models.mappedoperator.MappedOperator object' has no attribute 'parameters'`
### What you think should happen instead
When using Jinja-templated SQL queries, the attribute that is being used for the mapping should be accessible via `{{ task.<operator attribute> }}`. Executing the same SQL query with classic, non-mapped tasks allows this operator attr access from the `task` context object.
Ideally, the same interface should apply to both non-mapped and mapped tasks. Also, with the preference for `parameters` over `params` in SQL-type operators, having the ability to map over `parameters` will help folks move from using `params` to `parameters`.
### How to reproduce
Consider the following DAG:
```python
from pendulum import datetime

from airflow.decorators import dag
from airflow.operators.sql import SQLValueCheckOperator
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

CORE_SQL = "SELECT COUNT(*) FROM {{ task.database }}.tbl;"
SNOWFLAKE_SQL = """SELECT * FROM {{ task.parameters.tbl }};"""


@dag(dag_id="map-city", start_date=datetime(2022, 6, 7), schedule_interval=None)
def map_city():
    classic_sql_value_check = SQLValueCheckOperator(
        task_id="classic_sql_value_check",
        conn_id="snowflake",
        sql=CORE_SQL,
        database="dev",
        pass_value=20000,
    )

    mapped_value_check = SQLValueCheckOperator.partial(
        task_id="check_row_count",
        conn_id="snowflake",
        sql=CORE_SQL,
        pass_value=20000,
    ).expand(database=["dev", "production"])

    classic_snowflake_task = SnowflakeOperator(
        task_id="classic_snowflake_task",
        snowflake_conn_id="snowflake",
        sql=SNOWFLAKE_SQL,
        parameters={"tbl": "foo"},
    )

    mapped_snowflake_task = SnowflakeOperator.partial(
        task_id="mapped_snowflake_task", snowflake_conn_id="snowflake", sql=SNOWFLAKE_SQL
    ).expand(
        parameters=[
            {"tbl": "foo"},
            {"tbl": "bar"},
        ]
    )


_ = map_city()
```
**`SQLValueCheckOperator` tasks**
The logs for the "classic_sql_value_check", non-mapped task show the query executing as expected:
`[2022-06-11, 02:01:03 UTC] {sql.py:204} INFO - Executing SQL check: SELECT COUNT(*) FROM dev.tbl;`
while the mapped "check_row_count" task fails with the following exception:
```bash
[2022-06-11, 02:01:03 UTC] {standard_task_runner.py:79} INFO - Running: ['airflow', 'tasks', 'run', 'map-city', 'check_row_count', 'manual__2022-06-11T02:01:01.831761+00:00', '--job-id', '350', '--raw', '--subdir', 'DAGS_FOLDER/map_city.py', '--cfg-path', '/tmp/tmpm5bg9mt5', '--map-index', '0', '--error-file', '/tmp/tmp2kbilt2l']
[2022-06-11, 02:01:03 UTC] {standard_task_runner.py:80} INFO - Job 350: Subtask check_row_count
[2022-06-11, 02:01:03 UTC] {task_command.py:370} INFO - Running <TaskInstance: map-city.check_row_count manual__2022-06-11T02:01:01.831761+00:00 map_index=0 [running]> on host 569596df5be5
[2022-06-11, 02:01:03 UTC] {taskinstance.py:1889} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1451, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1555, in _execute_task_with_callbacks
task_orig = self.render_templates(context=context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2212, in render_templates
rendered_task = self.task.render_template_fields(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 726, in render_template_fields
self._do_render_template_fields(
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/abstractoperator.py", line 344, in _do_render_template_fields
rendered_content = self.render_template(
File "/usr/local/lib/python3.9/site-packages/airflow/models/abstractoperator.py", line 391, in render_template
return render_template_to_string(template, context)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/helpers.py", line 296, in render_template_to_string
return render_template(template, context, native=False)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/helpers.py", line 291, in render_template
return "".join(nodes)
File "<template>", line 13, in root
File "/usr/local/lib/python3.9/site-packages/jinja2/runtime.py", line 903, in _fail_with_undefined_error
raise self._undefined_exception(self._undefined_message)
jinja2.exceptions.UndefinedError: 'airflow.models.mappedoperator.MappedOperator object' has no attribute 'database'
```
**`SnowflakeOperator` tasks**
Similarly, the "classic_snowflake_task" non-mapped task is able to execute the SQL query as expected:
`[2022-06-11, 02:01:04 UTC] {snowflake.py:324} INFO - Running statement: SELECT * FROM foo;, parameters: {'tbl': 'foo'}`
while the mapped "mapped_snowflake_task" task fails to execute the query:
```bash
[2022-06-11, 02:01:03 UTC] {standard_task_runner.py:79} INFO - Running: ['airflow', 'tasks', 'run', 'map-city', 'mapped_snowflake_task', 'manual__2022-06-11T02:01:01.831761+00:00', '--job-id', '347', '--raw', '--subdir', 'DAGS_FOLDER/map_city.py', '--cfg-path', '/tmp/tmp6kmqs5ew', '--map-index', '0', '--error-file', '/tmp/tmpkufg9xqx']
[2022-06-11, 02:01:03 UTC] {standard_task_runner.py:80} INFO - Job 347: Subtask mapped_snowflake_task
[2022-06-11, 02:01:03 UTC] {task_command.py:370} INFO - Running <TaskInstance: map-city.mapped_snowflake_task manual__2022-06-11T02:01:01.831761+00:00 map_index=0 [running]> on host 569596df5be5
[2022-06-11, 02:01:03 UTC] {taskinstance.py:1889} ERROR - Task failed with exception
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1451, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 1555, in _execute_task_with_callbacks
task_orig = self.render_templates(context=context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/taskinstance.py", line 2212, in render_templates
rendered_task = self.task.render_template_fields(context)
File "/usr/local/lib/python3.9/site-packages/airflow/models/mappedoperator.py", line 726, in render_template_fields
self._do_render_template_fields(
File "/usr/local/lib/python3.9/site-packages/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/airflow/models/abstractoperator.py", line 344, in _do_render_template_fields
rendered_content = self.render_template(
File "/usr/local/lib/python3.9/site-packages/airflow/models/abstractoperator.py", line 391, in render_template
return render_template_to_string(template, context)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/helpers.py", line 296, in render_template_to_string
return render_template(template, context, native=False)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/helpers.py", line 291, in render_template
return "".join(nodes)
File "<template>", line 13, in root
File "/usr/local/lib/python3.9/site-packages/jinja2/sandbox.py", line 326, in getattr
value = getattr(obj, attribute)
File "/usr/local/lib/python3.9/site-packages/jinja2/runtime.py", line 910, in __getattr__
return self._fail_with_undefined_error()
File "/usr/local/lib/python3.9/site-packages/jinja2/runtime.py", line 903, in _fail_with_undefined_error
raise self._undefined_exception(self._undefined_message)
jinja2.exceptions.UndefinedError: 'airflow.models.mappedoperator.MappedOperator object' has no attribute 'parameters'
```
### Operating System
Debian GNU/Linux 10 (buster)
### Versions of Apache Airflow Providers
apache-airflow-providers-snowflake==2.7.0
### Deployment
Astronomer
### Deployment details
Astronomer Runtime 5.0.3
### Anything else
Even though the `{{ task.<operator attr> }}` method does not work for mapped tasks, there is a workaround. Given the `SnowflakeOperator` example from above attempting to execute the query `SELECT * FROM {{ task.parameters.tbl }};`, users can modify the templated query to `SELECT * FROM {{ task.mapped_kwargs.parameters[ti.map_index].tbl }};` for successful execution. This workaround isn't very obvious though, and it requires some solid digging into the new 2.3.0 code.
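Concretely, the workaround applied to the mapped `SnowflakeOperator` example above would look like this (sketch only; it relies on the internal `mapped_kwargs` structure):
```python
# Only valid for mapped tasks: ti.map_index selects this instance's expanded kwargs
SNOWFLAKE_SQL = """SELECT * FROM {{ task.mapped_kwargs.parameters[ti.map_index].tbl }};"""
```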
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24388 | https://github.com/apache/airflow/pull/26702 | ed494594ef213b3633aa3972e1b8b4ad18b88e42 | 5560a46bfe8a14205c5e8a14f0b5c2ae74ee100c | 2022-06-11T02:28:05Z | python | 2022-09-27T12:52:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,360 | ["airflow/providers/snowflake/transfers/s3_to_snowflake.py", "airflow/providers/snowflake/utils/__init__.py", "airflow/providers/snowflake/utils/common.py", "docs/apache-airflow-providers-snowflake/operators/s3_to_snowflake.rst", "tests/providers/snowflake/transfers/test_s3_to_snowflake.py", "tests/providers/snowflake/utils/__init__.py", "tests/providers/snowflake/utils/test_common.py", "tests/system/providers/snowflake/example_snowflake.py"] | Pattern parameter in S3ToSnowflakeOperator | ### Description
I would like to propose adding a `pattern` parameter to allow loading only those files that satisfy a given regex pattern.
This functionality is supported on the Snowflake side; it just requires passing a parameter through to the `COPY INTO` command.
[Snowflake documentation](https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html#loading-using-pattern-matching)
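On the Snowflake side this is the `PATTERN` copy option, e.g. (stage name and pattern are illustrative):
```sql
COPY INTO my_table
  FROM @my_s3_stage
  PATTERN = '.*sales.*[.]csv';
```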
### Use case/motivation
I have multiple files with different schemas in one folder. I would like to move only the files that match a given name filter to Snowflake, and I am not able to do that with the `prefix` parameter.
### Related issues
None that I am aware of.
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24360 | https://github.com/apache/airflow/pull/24571 | 5877f45d65d5aa864941efebd2040661b6f89cb1 | 66e84001df069c76ba8bfefe15956c4018844b92 | 2022-06-09T22:13:38Z | python | 2022-06-22T07:49:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,352 | ["airflow/providers/google/cloud/operators/gcs.py", "tests/providers/google/cloud/operators/test_gcs.py"] | GCSDeleteObjectsOperator raises unexpected ValueError for prefix set as empty string | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
All versions.
```
apache-airflow-providers-google>=1.0.0b1
apache-airflow-backport-providers-google>=2020.5.20rc1
```
### Apache Airflow version
2.3.2 (latest released)
### Operating System
macOS 12.3.1
### Deployment
Composer
### Deployment details
_No response_
### What happened
I'm currently doing the upgrade check in Airflow 1.10.15 and one of the topics is to change the import locations from contrib to the specific provider.
While replacing:
`airflow.contrib.operators.gcs_delete_operator.GoogleCloudStorageDeleteOperator`
By:
`airflow.providers.google.cloud.operators.gcs.GCSDeleteObjectsOperator`
An error appeared in the UI: `Broken DAG: [...] Either object or prefix should be set. Both are None`
---
Upon further investigation, I found out that while the `GoogleCloudStorageDeleteOperator` from contrib module had this parameter check (as can be seen [here](https://github.com/apache/airflow/blob/v1-10-stable/airflow/contrib/operators/gcs_delete_operator.py#L63)):
```python
assert objects is not None or prefix is not None
```
The new `GCSDeleteObjectsOperator` from Google provider module have the following (as can be seen [here](https://github.com/apache/airflow/blob/main/airflow/providers/google/cloud/operators/gcs.py#L308-L309)):
```python
if not objects and not prefix:
raise ValueError("Either object or prefix should be set. Both are None")
```
---
As it turns out, these conditions are not equivalent: a `prefix` variable containing an empty string won't raise an error in the first case, but will raise one in the second.
### What you think should happen instead
This behavior does not match the documented description, since a prefix given as an empty string is perfectly valid in case the user wants to delete all objects within the bucket.
Furthermore, there were no philosophical changes within the API in that timeframe. This code change happened in [this commit](https://github.com/apache/airflow/commit/25e9047a4a4da5fad4f85c366e3a6262c0a4f68e#diff-c45d838a139b258ab703c23c30fd69078108f14a267731bd2be5cc1c8a7c02f5), where the developer's intent was clearly to remove assertions, not to change the logic behind the validation. In fact, it even relates to a PR for [this Airflow JIRA ticket](https://issues.apache.org/jira/browse/AIRFLOW-6193).
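The difference is easy to demonstrate in isolation (a minimal sketch of the two checks):
```python
objects, prefix = None, ""  # empty-string prefix means "everything in the bucket"

# Old contrib check: passes, because "" is not None
assert objects is not None or prefix is not None

# New provider check: raises, because `not ""` evaluates to True
if not objects and not prefix:
    raise ValueError("Either object or prefix should be set. Both are None")
```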
### How to reproduce
Add a `GCSDeleteObjectsOperator` with a parameter `prefix=""` to a DAG.
Example:
```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.google.cloud.operators.gcs import GCSDeleteObjectsOperator

with DAG('test_dag', schedule_interval=timedelta(days=1), start_date=datetime(2022, 1, 1)) as dag:
    task = GCSDeleteObjectsOperator(
        task_id='task_that_generates_ValueError',
        bucket_name='some_bucket',
        prefix=''
    )
```
### Anything else
In my opinion, the error message wasn't very helpful either, since it just breaks the DAG without pointing out which task is causing the issue. It took me 20 minutes to pinpoint the exact task in my case, since I was dealing with a DAG with a lot of tasks.
Adding the `task_id` to the error message could improve the developer experience in that case.
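A sketch of what the check could look like (hypothetical; `task_id` here is the operator kwarg passed through to the message):
```python
if not objects and not prefix:
    # hypothetical: surface the offending task in the error message
    raise ValueError(
        f"Either objects or prefix should be set. Both are None (task_id={task_id!r})"
    )
```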
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24352 | https://github.com/apache/airflow/pull/24353 | dd35fdaf35b6e46fd69a1b1da36ae7ffc0505dcb | e7a1c50d62680a521ef90a424b7eff03635081d5 | 2022-06-09T17:23:11Z | python | 2022-06-19T22:07:56Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,346 | ["airflow/utils/db.py"] | Add salesforce_default to List Connection | ### Apache Airflow version
2.1.2
### What happened
`salesforce_default` is not in the list of default Connections.
### What you think should happen instead
`salesforce_default` should be added to the default connection list.
### How to reproduce
After resetting the DB, look at the connection list in the UI (List Connections).
### Operating System
GCP Container
### Versions of Apache Airflow Providers
_No response_
### Deployment
Composer
### Deployment details
composer-1.17.1-airflow-2.1.2
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24346 | https://github.com/apache/airflow/pull/24347 | e452949610cff67c0e0a9918a8fefa7e8cc4b8c8 | 6d69dd062f079a8fbf72563fd218017208bfe6c1 | 2022-06-09T14:56:06Z | python | 2022-06-13T18:25:58Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,343 | ["airflow/providers/google/cloud/operators/bigquery.py"] | BigQueryCreateEmptyTableOperator do not deprecated bigquery_conn_id yet | ### Apache Airflow version
2.3.2 (latest released)
### What happened
`bigquery_conn_id` is deprecated for other operators like `BigQueryDeleteTableOperator` and replaced by `gcp_conn_id`, but that's not the case for `BigQueryCreateEmptyTableOperator`.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24343 | https://github.com/apache/airflow/pull/24376 | dd78e29a8c858769c9c21752f319e19af7f64377 | 8e0bddaea69db4d175f03fa99951f6d82acee84d | 2022-06-09T09:19:46Z | python | 2022-06-12T21:07:16Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,338 | ["airflow/exceptions.py", "airflow/models/xcom_arg.py", "tests/decorators/test_python.py"] | TaskFlow AirflowSkipException causes downstream step to fail | ### Apache Airflow version
2.3.2 (latest released)
### What happened
I am using the TaskFlow API and have two tasks that lead to the same downstream task. These tasks check for new data and, when it is found, set an XCom entry with the new filename for the downstream task to handle. If no data is found, the upstream tasks raise a skip exception.
The downstream task has the trigger_rule = none_failed_min_one_success.
The problem is that a skipped task doesn't set any XCom. When the downstream task starts, it raises the error:
`airflow.exceptions.AirflowException: XComArg result from task2 at airflow_2_3_xcomarg_render_error with key="return_value" is not found!`
### What you think should happen instead
Based on the trigger rule "none_failed_min_one_success", the expectation is that an upstream task should be allowed to skip and the downstream task should still run. While the downstream task does try to start based on the trigger rules, it never really gets to run, since the error is raised while rendering the arguments.
### How to reproduce
Example dag will generate the error if run.
```
from airflow.decorators import dag, task
from airflow.exceptions import AirflowSkipException


@task
def task1():
    return "example.csv"


@task
def task2():
    raise AirflowSkipException()


@task(trigger_rule="none_failed_min_one_success")
def downstream_task(t1, t2):
    print("task ran")


@dag(
    default_args={"owner": "Airflow", "start_date": "2022-06-07"},
    schedule_interval=None,
)
def airflow_2_3_xcomarg_render_error():
    t1 = task1()
    t2 = task2()
    downstream_task(t1, t2)


example_dag = airflow_2_3_xcomarg_render_error()
```
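One possible (untested) workaround sketch is to push a placeholder XCom before skipping, so the downstream `XComArg` still has something to resolve; `ti` is the task instance that the TaskFlow API injects by parameter name:
```python
@task
def task2(ti=None):
    # untested sketch: publish a default return_value before skipping
    ti.xcom_push(key="return_value", value=None)
    raise AirflowSkipException()
```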
### Operating System
Ubuntu 20.04.4 LTS
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24338 | https://github.com/apache/airflow/pull/25661 | c7215a28f9df71c63408f758ed34253a4dfaa318 | a4e38978194ef46565bc1e5ba53ecc65308d09aa | 2022-06-08T20:07:42Z | python | 2022-08-16T12:05:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,331 | ["dev/example_dags/README.md", "dev/example_dags/update_example_dags_paths.py"] | "Example DAGs" link under kubernetes-provider documentation is broken. Getting 404 | ### What do you see as an issue?
The _Example DAGs_ folder is not available for _apache-airflow-providers-cncf-kubernetes_, which results in a broken link on the documentation page ( https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/index.html ).
A 404 error is returned when clicking the _Example DAGs_ link (https://github.com/apache/airflow/tree/main/airflow/providers/cncf/kubernetes/example_dags).
<img width="1464" alt="Screenshot 2022-06-08 at 9 01 56 PM" src="https://user-images.githubusercontent.com/11991059/172657376-8a556e9e-72e5-4aab-9c71-b1da239dbf5c.png">
<img width="1475" alt="Screenshot 2022-06-08 at 9 01 39 PM" src="https://user-images.githubusercontent.com/11991059/172657413-c72d14f2-071f-4452-baf7-0f41504a5a3a.png">
### Solving the problem
A folder named _example_dags_ should be created under https://github.com/apache/airflow/tree/main/airflow/providers/cncf/kubernetes/ containing Kubernetes-specific DAG examples.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24331 | https://github.com/apache/airflow/pull/24348 | 74ac9f788c31512b1fcd9254282905f34cc40666 | 85c247ae10da5ee93f26352d369f794ff4f2e47c | 2022-06-08T15:33:29Z | python | 2022-06-09T17:33:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,328 | ["airflow/models/taskinstance.py", "tests/models/test_taskinstance.py"] | `TI.log_url` is incorrect with mapped tasks | ### Apache Airflow version
2.3.0
### What happened
I had an `on_failure_callback` that sent `task_instance.log_url` to Slack. It no longer behaves correctly: instead of the logs for my task, it gives me a page with no logs rendered.
(Example of failure, URL like: https://XYZ.astronomer.run/dhp2pmdd/log?execution_date=2022-06-05T00%3A00%3A00%2B00%3A00&task_id=create_XXX_zip_files_and_upload&dag_id=my_dag )

### What you think should happen instead
The correct behavior would be the URL:
https://XYZ.astronomer.run/dhp2pmdd/log?execution_date=2022-06-05T00%3A00%3A00%2B00%3A00&task_id=create_XXX_zip_files_and_upload&dag_id=my_dag&map_index=0
as exemplified:

### How to reproduce
_No response_
### Operating System
Debian/Docker
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24328 | https://github.com/apache/airflow/pull/24335 | a9c350762db4dca7ab5f6c0bfa0c4537d697b54c | 48a6155bb1478245c1dd8b6401e4cce00e129422 | 2022-06-08T14:44:49Z | python | 2022-06-14T20:15:49Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,321 | ["airflow/providers/amazon/aws/sensors/s3.py", "tests/providers/amazon/aws/sensors/test_s3_key.py"] | S3KeySensor wildcard_match only matching key prefixes instead of full patterns | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
3.4.0
### Apache Airflow version
2.3.2 (latest released)
### Operating System
Debian GNU/Linux 10
### Deployment
Docker-Compose
### Deployment details
_No response_
### What happened
For patterns like `"*.zip"`, the `S3KeySensor` succeeds for all files; it does not take the full pattern (i.e. the `".zip"` part) into account.
Bug introduced in https://github.com/apache/airflow/pull/22737
### What you think should happen instead
Full pattern match as in version 3.3.0 (in S3KeySensor poke()):
```
...
if self.wildcard_match:
    return self.get_hook().check_for_wildcard_key(self.bucket_key, self.bucket_name)
...
```
Alternatively, the files obtained by `files = self.get_hook().get_file_metadata(prefix, bucket_name)`, which only match the prefix, should be further filtered.
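A sketch of that extra filtering step (assuming `get_file_metadata` returns S3 object dicts with a `Key` field):
```python
import fnmatch

files = [
    f
    for f in self.get_hook().get_file_metadata(prefix, bucket_name)
    if fnmatch.fnmatch(f["Key"], self.bucket_key)
]
```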
### How to reproduce
Create a DAG with a key sensor task whose key contains a wildcard and a suffix. For example, the following task should succeed only if ZIP files are available in "my-bucket", but instead it succeeds for any file:
`S3KeySensor(task_id="wait_for_file", bucket_name="my-bucket", bucket_key="*.zip", wildcard_match=True)`
### Anything else
Not directly part of this issue, but at the same time I would suggest including additional file attributes in the `_check_key` method, e.g. the actual key of the files. This way more filters (e.g. excluding specific keys) could be implemented using the `check_fn`.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24321 | https://github.com/apache/airflow/pull/24378 | f8e106a531d2dc502bdfe47c3f460462ab0a156d | 7fed7f31c3a895c0df08228541f955efb16fbf79 | 2022-06-08T11:44:58Z | python | 2022-06-11T19:31:17Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,318 | ["airflow/providers/amazon/aws/hooks/emr.py", "airflow/providers/amazon/aws/operators/emr.py"] | `EmrCreateJobFlowOperator` does not work if emr_conn_id param contain credential | ### Apache Airflow Provider(s)
amazon
### Versions of Apache Airflow Providers
_No response_
### Apache Airflow version
2.3.2 (latest released)
### Operating System
os
### Deployment
Other
### Deployment details
_No response_
### What happened
`EmrCreateJobFlowOperator` currently has two connection params, `emr_conn_id` and `aws_conn_id`. It only works when I set `aws_conn_id` containing credentials and an empty `emr_conn_id`; it does not work in the cases below:
- when I set both `aws_conn_id` and `emr_conn_id` in the operator and both connections contain credentials, i.e. they have `aws_access_key_id` and other params in the Airflow connection extra
```
Unknown parameter in input: "aws_access_key_id", must be one of: Name, LogUri, LogEncryptionKmsKeyId, AdditionalInfo, AmiVersion, ReleaseLabel, Instances, Steps, BootstrapActions, SupportedProducts, NewSupportedProducts, Applications, Configurations, VisibleToAllUsers, JobFlowRole, ServiceRole, Tags, SecurityConfiguration, AutoScalingRole, ScaleDownBehavior, CustomAmiId, EbsRootVolumeSize, RepoUpgradeOnBoot, KerberosAttributes, StepConcurrencyLevel, ManagedScalingPolicy, PlacementGroupConfigs, AutoTerminationPolicy, OSReleaseLabel
```
- when I set both `aws_conn_id` and `emr_conn_id` in the operator and only the `emr_conn_id` connection contains credentials, i.e. it has `aws_access_key_id` and other params in the Airflow connection extra
```
[2022-06-07, 20:49:19 UTC] {taskinstance.py:1826} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/airflow/airflow/providers/amazon/aws/operators/emr.py", line 324, in execute
response = emr.create_job_flow(job_flow_overrides)
File "/opt/airflow/airflow/providers/amazon/aws/hooks/emr.py", line 87, in create_job_flow
response = self.get_conn().run_job_flow(**job_flow_overrides)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 508, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 895, in _make_api_call
operation_model, request_dict, request_context
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 917, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 116, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 195, in _send_request
request = self.create_request(request_dict, operation_model)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 134, in create_request
operation_name=operation_model.name,
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 412, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 256, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 239, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/signers.py", line 103, in handler
return self.sign(operation_name, request)
File "/usr/local/lib/python3.7/site-packages/botocore/signers.py", line 187, in sign
auth.add_auth(request)
File "/usr/local/lib/python3.7/site-packages/botocore/auth.py", line 405, in add_auth
raise NoCredentialsError()
botocore.exceptions.NoCredentialsError: Unable to locate credentials
```
- When I set only aws_conn_id in the operator and it contains credentials
```
Traceback (most recent call last):
File "/opt/airflow/airflow/providers/amazon/aws/operators/emr.py", line 324, in execute
response = emr.create_job_flow(job_flow_overrides)
File "/opt/airflow/airflow/providers/amazon/aws/hooks/emr.py", line 90, in create_job_flow
emr_conn = self.get_connection(self.emr_conn_id)
File "/opt/airflow/airflow/hooks/base.py", line 67, in get_connection
conn = Connection.get_connection_from_secrets(conn_id)
File "/opt/airflow/airflow/models/connection.py", line 430, in get_connection_from_secrets
raise AirflowNotFoundException(f"The conn_id `{conn_id}` isn't defined")
```
- When I set only emr_conn_id in the operator and it contains credentials
```
[2022-06-07, 20:49:19 UTC] {taskinstance.py:1826} ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/airflow/airflow/providers/amazon/aws/operators/emr.py", line 324, in execute
response = emr.create_job_flow(job_flow_overrides)
File "/opt/airflow/airflow/providers/amazon/aws/hooks/emr.py", line 87, in create_job_flow
response = self.get_conn().run_job_flow(**job_flow_overrides)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 508, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 895, in _make_api_call
operation_model, request_dict, request_context
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 917, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 116, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 195, in _send_request
request = self.create_request(request_dict, operation_model)
File "/usr/local/lib/python3.7/site-packages/botocore/endpoint.py", line 134, in create_request
operation_name=operation_model.name,
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 412, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 256, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/hooks.py", line 239, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/signers.py", line 103, in handler
return self.sign(operation_name, request)
File "/usr/local/lib/python3.7/site-packages/botocore/signers.py", line 187, in sign
auth.add_auth(request)
File "/usr/local/lib/python3.7/site-packages/botocore/auth.py", line 405, in add_auth
raise NoCredentialsError()
botocore.exceptions.NoCredentialsError: Unable to locate credentials
```
- When I set `aws_conn_id` with credentials and `emr_conn_id` without credentials (i.e. an empty extra field in the Airflow connection), then it works
### What you think should happen instead
It should work with just one connection id, i.e. with either `aws_conn_id` or `emr_conn_id`, and it should not fail even if `emr_conn_id` contains credentials.
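For reference, the only configuration that currently works looks roughly like this (`JOB_FLOW_OVERRIDES` is a placeholder):
```python
create_job_flow = EmrCreateJobFlowOperator(
    task_id="create_job_flow",
    aws_conn_id="aws_default",  # holds the AWS credentials
    emr_conn_id="emr_default",  # must have an empty extra field (no credentials)
    job_flow_overrides=JOB_FLOW_OVERRIDES,
)
```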
### How to reproduce
- Create an `EmrCreateJobFlowOperator` and pass both `aws_conn_id` and `emr_conn_id`, or
- Create an `EmrCreateJobFlowOperator` and pass only `aws_conn_id` or `emr_conn_id`
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24318 | https://github.com/apache/airflow/pull/24306 | 4daf51a2c388b41201a0a8095e0a97c27d6704c8 | 99d98336312d188a078721579a3f71060bdde542 | 2022-06-08T09:40:10Z | python | 2022-06-10T13:25:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,281 | ["breeze"] | Fix command in breeze | ### What do you see as an issue?
There is a mistake in the command displayed in breeze.
```
The answer is 'no'. Skipping Installing pipx?.
Please run those commands manually (you might need to restart shell between them):i
pip -m install pipx
pipx ensurepath
pipx install -e '/Users/ishiis/github/airflow/dev/breeze/'
breeze setup-autocomplete --force
After that, both pipx and breeze should be available on your path
```
There is no -m option in pip.
```bash
% pip -m install pipx
Usage:
pip <command> [options]
no such option: -m
```
### Solving the problem
Fix the command: `pip` has no `-m` option (the `-m` flag belongs to the Python interpreter), so the displayed `pip -m install pipx` should be e.g. `python -m pip install pipx` or simply `pip install pipx`.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24281 | https://github.com/apache/airflow/pull/24282 | 6dc474fc82aa9325081b0c5f2b92c948e2f16f74 | 69ca427754c54c5496bf90b7fc70fdd646bc92e5 | 2022-06-07T11:10:12Z | python | 2022-06-07T11:13:16Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,197 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py"] | KubernetesPodOperator rendered template tab does not pretty print `env_vars` | ### Apache Airflow version
2.2.5
### What happened
I am using the `KubernetesPodOperator` for airflow tasks in `Airflow 2.2.5` and it does not render the `env_vars` in the `rendered template` tab in an easily human-consumable format, as it did in `Airflow 1.10.x`.

### What you think should happen instead
The `env_vars` should be pretty printed in a human-legible form; a possible fix is sketched below.
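One possible shape of a fix, assuming the standard `template_fields_renderers` mechanism is the right lever here; shown as a hypothetical subclass purely for illustration (the real fix would live in the provider itself):
```python
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

class PrettyEnvKubernetesPodOperator(KubernetesPodOperator):
    # Ask the UI to render env_vars as formatted Python instead of one long repr.
    template_fields_renderers = {
        **getattr(KubernetesPodOperator, "template_fields_renderers", {}),
        "env_vars": "py",
    }
```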
### How to reproduce
Create a task with the `KubernetesPodOperator` and check the `Rendered template` tab of the task instance.
### Operating System
Docker
### Versions of Apache Airflow Providers
2.2.5
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24197 | https://github.com/apache/airflow/pull/25850 | 1a087bca3d6ecceab96f9ab818b3b75262222d13 | db5543ef608bdd7aefdb5fefea150955d369ddf4 | 2022-06-04T20:45:01Z | python | 2022-08-22T15:43:43Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,160 | ["airflow/providers/google/cloud/operators/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"] | `BigQueryCreateExternalTableOperator` uses deprecated function | ### Body
The `BigQueryCreateExternalTableOperator` uses `create_external_table`:
https://github.com/apache/airflow/blob/cd49a8b9f64c57b5622025baee9247712c692e72/airflow/providers/google/cloud/operators/bigquery.py#L1131-L1147
this function is deprecated:
https://github.com/apache/airflow/blob/511d0ee256b819690ccf0f6b30d12340b1dd7f0a/airflow/providers/google/cloud/hooks/bigquery.py#L598-L602
**The task:**
Refactor/change the operator to replace `create_external_table` with `create_empty_table` (a rough sketch of the replacement call follows).
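A rough sketch of what the replacement call could look like; the `table_resource` contents below are illustrative and do not cover the full mapping of the operator's arguments:
```python
from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook

bq_hook = BigQueryHook(gcp_conn_id="google_cloud_default")
bq_hook.create_empty_table(
    table_resource={
        "tableReference": {
            "projectId": "my-project",  # placeholder ids
            "datasetId": "my_dataset",
            "tableId": "my_external_table",
        },
        # external tables are expressed via externalDataConfiguration in the API resource
        "externalDataConfiguration": {
            "sourceFormat": "CSV",
            "sourceUris": ["gs://my-bucket/data/*.csv"],
        },
    },
    exists_ok=True,
)
```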
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/24160 | https://github.com/apache/airflow/pull/24363 | 626d9db2908563c4b7675db5de2cb1e3acde82e9 | c618da444e841afcfd73eeb0bce9c87648c89140 | 2022-06-03T11:29:43Z | python | 2022-07-12T11:17:06Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,103 | ["chart/templates/workers/worker-kedaautoscaler.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_keda.py"] | Add support for KEDA HPA Config to Helm Chart | ### Description
> When managing the scale of a group of replicas using the HorizontalPodAutoscaler, it is possible that the number of replicas keeps fluctuating frequently due to the dynamic nature of the metrics evaluated. This is sometimes referred to as thrashing, or flapping.
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#flapping
Sometimes clusters need to restrict the flapping of Airflow worker replicas.
KEDA supports [`advanced.horizontalPodAutoscalerConfig`](https://keda.sh/docs/1.4/concepts/scaling-deployments/).
It would be great if the users would have the option in the helm chart to configure scale down behavior.
### Use case/motivation
The Helm chart currently cannot set KEDA's advanced options.
We want to set advanced options like `scaleDown.stabilizationWindowSeconds` and `scaleDown.policies`.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24103 | https://github.com/apache/airflow/pull/24220 | 97948ecae7fcbb7dfdfb169cfe653bd20a108def | 8639c70f187a7d5b8b4d2f432d2530f6d259eceb | 2022-06-02T10:15:04Z | python | 2022-06-30T17:16:58Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,077 | ["docs/exts/exampleinclude.py"] | Fix style of example-block | ### What do you see as an issue?
Style of example-block in the document is broken.
<img width="810" alt="example-block" src="https://user-images.githubusercontent.com/12693596/171412272-70ca791b-c798-4080-83ab-e358f290ac31.png">
This problem occurs when browser width is between 1000px and 1280px.
See: https://airflow.apache.org/docs/apache-airflow-providers-http/stable/operators.html
### Solving the problem
The container class should be removed.
```html
<div class="example-block-wrapper docutils container">
^^^^^^^^^
...
</div>
```
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24077 | https://github.com/apache/airflow/pull/24078 | e41b5a012427b5e7eab49de702b83dba4fc2fa13 | 5087f96600f6d7cc852b91079e92d00df6a50486 | 2022-06-01T14:08:48Z | python | 2022-06-01T17:50:57Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,060 | ["airflow/utils/db.py", "tests/utils/test_db.py"] | `airflow db check-migrations -t 0` fails to check migrations in airflow 2.3 | ### Apache Airflow version
2.3.1 (latest released)
### What happened
As of Airflow 2.3.0 the `airflow db check-migrations -t 0` command will ALWAYS think there are unapplied migrations (even if there are none to apply), whereas in Airflow 2.2.5 a single check would be run successfully.
This was caused by PR https://github.com/apache/airflow/pull/18439, which updated the loop from [`while True`](https://github.com/apache/airflow/blob/2.2.5/airflow/utils/db.py#L638) (which always loops at least once) to [`for ticker in range(timeout)`](https://github.com/apache/airflow/blob/2.3.0/airflow/utils/db.py#L696) (which will NOT loop at all if timeout=0); see the illustration below.
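The difference between the two versions, paraphrased rather than quoted; `_all_applied()` is a hypothetical stand-in for the real migration check:
```python
import time

def _all_applied() -> bool:
    """Hypothetical stand-in for the real 'are all migrations applied?' check."""
    ...

def check_migrations(timeout: int) -> None:
    # 2.2.5 used `while True`, guaranteeing at least one check even with timeout=0;
    # 2.3.0 uses `for ticker in range(timeout)`, and range(0) is empty, so with -t 0
    # the check never runs and the command always reports unapplied migrations.
    for ticker in range(max(1, timeout)):  # one possible fix: always check at least once
        if _all_applied():
            return
        time.sleep(1)
    raise TimeoutError("There are still unapplied migrations after the timeout")
```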
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
All
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24060 | https://github.com/apache/airflow/pull/24068 | 841ed271017ff35a3124f1d1a53a5c74730fed60 | 84d7b5ba39b3ff1fb5b856faec8fd4e731d3f397 | 2022-05-31T23:36:50Z | python | 2022-06-01T11:03:53Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,037 | [".gitignore", "chart/.gitignore", "chart/Chart.lock", "chart/Chart.yaml", "chart/INSTALL", "chart/NOTICE", "chart/charts/postgresql-10.5.3.tgz", "scripts/ci/libraries/_kind.sh", "tests/charts/conftest.py"] | Frequent failures of helm chart tests | ### Apache Airflow version
main (development)
### What happened
We keep getting very frequent failures of Helm Chart tests, and it seems that a big number of those errors are caused by failures when pulling the postgres chart from bitnami:
Example here (but I saw it happening very often recently):
https://github.com/apache/airflow/runs/6666449965?check_suite_focus=true#step:9:314
```
Save error occurred: could not find : chart postgresql not found in https://charts.bitnami.com/bitnami: looks like "https://charts.bitnami.com/bitnami" is not a valid chart repository or cannot be reached: stream error: stream ID 1; INTERNAL_ERROR
Deleting newly downloaded charts, restoring pre-update state
Error: could not find : chart postgresql not found in https://charts.bitnami.com/bitnami: looks like "https://charts.bitnami.com/bitnami" is not a valid chart repository or cannot be reached: stream error: stream ID 1; INTERNAL_ERROR
Dumping logs from KinD
```
It is not only a problem for our CI but it might be similar problem for our users who want to install the chart - they might also get the same kinds of error.
I guess we should either make it more resilient to intermittent problems with bitnami charts or use another chart (or maybe even host the chart ourselves somewhere within apache infrastructure. While the postgres chart is not really needed for most "production" users, it is still a dependency of our chart and it makes our chart depend on external and apparently flaky service.
### What you think should happen instead
We should find (or host ourselves) more stable dependency or get rid of it.
### How to reproduce
Look at some recent CI builds and see that they often fail in K8S tests and more often than not the reason is missing postgresql chart.
### Operating System
any
### Versions of Apache Airflow Providers
not relevant
### Deployment
Other
### Deployment details
CI
### Anything else
Happy to make the change once we agree what's the best way :).
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24037 | https://github.com/apache/airflow/pull/24395 | 5d5976c08c867b8dbae8301f46e0c422d4dde1ed | 779571b28b4ae1906b4a23c031d30fdc87afb93e | 2022-05-31T08:08:25Z | python | 2022-06-14T16:07:47Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,022 | ["airflow/providers/apache/hdfs/hooks/webhdfs.py", "tests/providers/apache/hdfs/hooks/test_webhdfs.py"] | Make port as optional argument | ### Apache Airflow version
2.3.1 (latest released)
### What happened
My WebHDFS service is running behind a proxy.
I need to give a complicated URL and do not want a port appended in front of the URL.
So there are two options:
1) use the connection's Schema field to add path segments before `webhdfs/v1`, or
2) make appending of the port optional, so that the complete URL before `webhdfs` can be provided in the hostname (see the sketch below).
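A simplified sketch of option 2 inside the hook; `connection` stands for the resolved Airflow Connection, and the client construction is condensed to the insecure case:
```python
from hdfs import InsecureClient

def _build_client(connection):
    host = connection.host
    if connection.port:  # proposed change: append the port only when one is configured
        host = f"{host}:{connection.port}"
    return InsecureClient(f"http://{host}", user=connection.login)
```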
### What you think should happen instead
error while connecting to webhdfs
[2022-05-27, 15:24:43 UTC] {webhdfs.py:89} INFO - Read operation on namenode abc.xyz.com failed with error: b''
### How to reproduce
run webhdfs behind a proxy without exposing the actual servers
### Operating System
Debian
### Versions of Apache Airflow Providers
2.2.3 apache-airflow-providers-apache-hdfs
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24022 | https://github.com/apache/airflow/pull/24550 | ed37c3a0e87f64e6942497c5d4c15078a5e02d16 | d8ec1ec8ecae64dbe8591496e8bec71d5e3ca25a | 2022-05-30T11:37:39Z | python | 2022-06-28T12:44:09Z |
closed | apache/airflow | https://github.com/apache/airflow | 24,015 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "tests/providers/cncf/kubernetes/operators/test_kubernetes_pod.py"] | KubernetesPodOperator/KubernetesExecutor: Failed to adopt pod 422 | ### Apache Airflow version
2.3.0
### What happened
Here I provide steps to reproduce this.
Goal: to describe how to reproduce the "Failed to adopt pod" error condition.
The DAG step described below should be of type KubernetesPodOperator (a minimal sketch follows after these notes).
NOTE: under normal operation (where the MAIN_AIRFLOW_POD is never recycled by k8s) we will never see this edge case; it is only when the workerPod is still running but the MAIN_AIRFLOW_POD is suddenly restarted/stopped that we see orphan workerPods.
1] Implement a contrived DAG, with a single step which is long-running (e.g. 6 minutes)
2] Deploy your airflow-2.1.4 / airflow-2.3.0 together with the contrived DAG
3] Run your contrived DAG.
4] In the middle of running the single step, check via "kubectl" that your Kubernetes workerPod has been created and is running
5] While the workerPod is still running, do "kubectl delete pod <OF_MAIN_AIRFLOW_POD>". This means the workerPod becomes an orphan.
6] The workerPod still continues to run through to completion, after which the K8S status of the pod will be Completed; however, the pod does not shut itself down.
7] Via "kubectl", start up a new <MAIN_AIRFLOW_POD> so the web UI is running again.
8] In the MAIN_AIRFLOW_POD web UI, run your contrived DAG again
9] While the contrived DAG is starting, you will see "Failed to adopt pod" printed in the logs, with a 422 error code.
The step-9 error message appears in two places in the airflow-2.1.4 / airflow-2.3.0 source code.
The general logging from the MAIN_APP in step 7 may also output the "Failed to adopt pod" error message.
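A minimal version of the contrived DAG from step 1 could look like this; the image, names and sleep duration are placeholders:
```python
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

with DAG(dag_id="orphan_pod_repro", start_date=datetime(2022, 1, 1), schedule_interval=None) as dag:
    long_running = KubernetesPodOperator(
        task_id="long_running",
        name="long-running",
        image="busybox",
        cmds=["sh", "-c", "sleep 360"],  # ~6 minutes: long enough to delete the main pod meanwhile
    )
```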
### What you think should happen instead
On previous versions of Airflow, e.g. 1.10.x, the orphan workerPods would be adopted by the second run of the Airflow main app and either used to continue the same DAG and/or cleared away when complete.
This does not happen with the newer Airflow 2.1.4 / 2.3.0 (presumably because the code changed): on the second run of the main app, it seems to try to adopt the workerPod but fails at that point ("Failed to adopt pod" in the logs), and hence it cannot clear away orphan pods.
Given this is an edge case only (i.e. we would not expect k8s to be recycling the main Airflow app/pod anyway), it does not seem a totally urgent bug. However, the only reason for raising this issue is that over time (e.g. a month?) any k8s namespace, in particular in PROD, will slowly fill up with orphan pods, and somebody would need to log in manually to delete old pods.
### How to reproduce
Same steps as described above under "What happened".
### Operating System
kubernetes
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other 3rd-party Helm chart
### Deployment details
nothing special.
it (CI/CD pipeline) builds the app. using requirements.txt to pull-in all the required python dependencies (including there is a dependency for the airflow-2.1.4 / 2.3.0)
it (CI/CD pipeline) packages the app as an ECR image & then deploy directly to k8s namespace.
### Anything else
This is 100% reproducible every time; I have tested it multiple times.
I also tested this on the old airflow-1.10.x a couple of times to verify that the bug did not exist previously.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/24015 | https://github.com/apache/airflow/pull/29279 | 05fb80ee9373835b2f72fad3e9976cf29aeca23b | d26dc223915c50ff58252a709bb7b33f5417dfce | 2022-05-30T07:49:27Z | python | 2023-02-01T11:50:58Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,999 | ["airflow/providers/google/cloud/operators/cloud_sql.py"] | Set color of Operators in cloud_sql.py | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==5.0.0
### Apache Airflow version
2.1.2
### Operating System
GCP Container
### Deployment
Composer
### Deployment details
composer-1.17.1-airflow-2.1.2
### What happened
No color is set for the GraphView legends. All CloudSQLOperator legends are white.
<img width="970" alt="legends" src="https://user-images.githubusercontent.com/12693596/170856092-0f9af91a-fbe9-47d9-9a9e-6e572b45fdfc.png">
### What you think should happen instead
A color should be set on the operators in cloud_sql.py, for example as sketched below.
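The fix is presumably a one-line class attribute, assuming the operators in cloud_sql.py share a base class (named here as `CloudSQLBaseOperator`); the hex value is an arbitrary placeholder:
```python
from airflow.models import BaseOperator

class CloudSQLBaseOperator(BaseOperator):  # excerpt of cloud_sql.py
    # any non-white colour so the Graph view legend becomes visible
    ui_color = "#3B48D9"
```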
### How to reproduce
Add example_cloud_sql.py to Dags and look at GraphView.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23999 | https://github.com/apache/airflow/pull/24000 | 65ad2aed26f7572ba0d3b04a33f9144989ac7117 | 3dd7b1ddbaa3170fbda30a8323286abf075f30ba | 2022-05-29T06:55:59Z | python | 2022-06-01T20:09:19Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,967 | ["airflow/utils/serve_logs.py", "tests/utils/test_serve_logs.py"] | serve_logs.py should respect the logging config's task handler's base_log_folder value | ### Apache Airflow version
2.2.5
### What happened
In a worker container, a Flask app is spun up to serve task log files, which are read by the webserver and rendered to the user in the UI. The log files cannot be read (404 error) if you override the task handler's base_log_folder value,
i.e. when in airflow.cfg the base_log_folder = `foo/bar/logs` and the task handler uses `{base_log_folder}/dags`.
### What you think should happen instead
The code in https://github.com/apache/airflow/blob/main/airflow/utils/serve_logs.py#L33 should read the log location from the logging config's task handler; a sketch follows after the reproduction snippet below.
### How to reproduce
Use a custom logging config and override the task handler's base log folder (see the snippet below).
Then run a DAG and try to view its logs in the UI; you will get a 404.
```
LOGGING_CONFIG["handlers"].update(
{
"task": {
"class": "airflow.utils.log.file_task_handler.FileTaskHandler",
"formatter": "airflow",
"base_log_folder": f"{BASE_LOG_FOLDER}/dags",
"filename_template": FILENAME_TEMPLATE,
"filters": ["mask_secrets"],
},
}
```
### Operating System
ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23967 | https://github.com/apache/airflow/pull/32781 | 9e3104b5effc56f77833772d86e7201bcc50b8c9 | 579ce065c5b0865b4aea580ee2d05c89680b7a8b | 2022-05-27T13:45:58Z | python | 2023-07-26T17:46:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,955 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py"] | Add missing parameter documentation for `KubernetesHook` and `KubernetesPodOperator` | ### Body
Currently the following modules are missing certain parameters in their docstrings. Because of this, these parameters are not captured in the [Python API docs for the Kubernetes provider](https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/_api/airflow/providers/cncf/kubernetes/index.html).
- [ ] KubernetesHook: `in_cluster`, `config_file`, `cluster_context`, `client_configuration`
- [ ] KubernetesPodOperator: `env_from`, `node_selectors`, `pod_runtime_info_envs`, `configmaps`
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23955 | https://github.com/apache/airflow/pull/24054 | 203fe71b49da760968c26752957f765c4649423b | 98b4e48fbc1262f1381e7a4ca6cce31d96e6f5e9 | 2022-05-27T03:23:54Z | python | 2022-06-06T22:20:02Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,954 | ["airflow/providers/databricks/operators/databricks.py", "airflow/providers/databricks/operators/databricks_sql.py", "docs/apache-airflow-providers-databricks/operators/submit_run.rst", "docs/spelling_wordlist.txt"] | Add missing parameter documentation in `DatabricksSubmitRunOperator` and `DatabricksSqlOperator` | ### Body
Currently the following modules are missing certain parameters in their docstrings. Because of this, these parameters are not captured in the [Python API docs for the Databricks provider](https://airflow.apache.org/docs/apache-airflow-providers-databricks/stable/_api/airflow/providers/databricks/index.html).
- [ ] DatabricksSubmitRunOperator: `tasks`
- [ ] DatabricksSqlOperator: `do_xcom_push`
- Granted this is really part of the `BaseOperator`, but this operator specifically sets the default value to False so it would be good if this was explicitly listed for users.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23954 | https://github.com/apache/airflow/pull/24599 | 2e5737df531410d2d678d09b5d2bba5d37a06003 | 82f842ffc56817eb039f1c4f1e2c090e6941c6af | 2022-05-27T03:10:32Z | python | 2022-07-28T15:19:17Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,949 | ["airflow/www/static/js/dags.js"] | Only autorefresh active dags on home page | ### Description
In https://github.com/apache/airflow/pull/22900, we added auto-refresh for the home page. Right now, we pass all dag_ids to the `last_dagruns`, `dag_stats` and `task_stats` endpoints. During auto-refresh, we should only request info for dags that are not paused.
On page load, we still want to check all three endpoints for all dags in view. But for subsequent auto-refresh requests we should only check active dags.
See [here](https://github.com/apache/airflow/blob/main/airflow/www/static/js/dags.js#L429) for where the homepage auto-refresh lives.
### Use case/motivation
Smaller requests should make the home page load faster.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23949 | https://github.com/apache/airflow/pull/24770 | e9d19a60a017224165e835949f623f106b97e1cb | 2a1472a6bef57fc57cfe4577bcbed5ba00521409 | 2022-05-26T21:08:49Z | python | 2022-07-13T14:55:57Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,945 | ["airflow/www/static/js/grid/dagRuns/Bar.jsx", "airflow/www/static/js/grid/dagRuns/Tooltip.jsx", "airflow/www/static/js/grid/details/Header.jsx", "airflow/www/static/js/grid/details/content/dagRun/index.jsx"] | Add backfill icon to grid view dag runs | ### Description
In the grid view, we use a play icon to indicate manually triggered dag runs. We should do the same for a backfilled dag run.
Possible icons can be found [here](https://react-icons.github.io/react-icons).
Note: We use the manual run icon in both the [dag run bar component](https://github.com/apache/airflow/blob/main/airflow/www/static/js/grid/dagRuns/Bar.jsx) and in the [details panel header](https://github.com/apache/airflow/blob/main/airflow/www/static/js/grid/details/Header.jsx)
### Use case/motivation
Help users quickly differentiate between manual, scheduled and backfill runs.
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23945 | https://github.com/apache/airflow/pull/23970 | 6962d8a3556999af2eec459c944417ddd6d2cfb3 | d470a8ef8df152eceee88b95365ff923db7cb2d7 | 2022-05-26T16:40:00Z | python | 2022-05-27T20:25:04Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,935 | ["airflow/providers/ftp/hooks/ftp.py", "tests/providers/ftp/hooks/test_ftp.py"] | No option to set blocksize when retrieving a file in ftphook | ### Apache Airflow version
2.0.0
### What happened
Using FTPHook, I'm trying to download a file in chunks, but the default blocksize is 8192 and cannot be changed.
The retrieve_file code calls conn.retrbinary(f'RETR {remote_file_name}', callback) without passing a blocksize, even though the function is declared as:
def retrbinary(self, cmd, callback, blocksize=8192, rest=None):
### What you think should happen instead
Allow passing a blocksize, as sketched below.
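A sketch of the requested change, excerpted from the hook; the new `block_size` keyword (defaulting to ftplib's 8192) is the proposed addition, and the omitted lines are unchanged:
```python
import os

from airflow.hooks.base import BaseHook

class FTPHook(BaseHook):  # excerpt; unrelated methods omitted
    def retrieve_file(self, remote_full_path, local_full_path_or_buffer,
                      callback=None, block_size: int = 8192):
        conn = self.get_conn()
        remote_path, remote_file_name = os.path.split(remote_full_path)
        conn.cwd(remote_path)
        # ... resolve the callback / output buffer exactly as today ...
        conn.retrbinary(f"RETR {remote_file_name}", callback, block_size)
```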
### How to reproduce
_No response_
### Operating System
gcp
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23935 | https://github.com/apache/airflow/pull/24860 | 2f29bfefb59b0014ae9e5f641d3f6f46c4341518 | 64412ee867fe0918cc3b616b3fb0b72dcd88125c | 2022-05-26T12:06:34Z | python | 2022-07-07T20:54:46Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,926 | ["airflow/www/fab_security/manager.py"] | Airflow 2.3.1 - gunicorn keeps removing and adding Permission menu access on Permissions to role Admin | ### Apache Airflow version
2.3.1 (latest released)
### What happened
The gunicorn log file is full of these messages.
This behavior was also present in release 2.3.0.
I don't think this was a problem in Airflow 2.2.x.
The problem is that the web server is now much less stable than before, and it is the second time in a week that I have found it not running.
```
[2022-05-26 05:50:55,083] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
[2022-05-26 05:50:55,098] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
[2022-05-26 05:50:55,159] {manager.py:508} INFO - Created Permission View: menu access on Permissions
[2022-05-26 05:50:55,164] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
[2022-05-26 05:51:25,688] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
[2022-05-26 05:51:25,704] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
[2022-05-26 05:51:25,763] {manager.py:508} INFO - Created Permission View: menu access on Permissions
[2022-05-26 05:51:25,769] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
[2022-05-26 05:51:56,283] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
[2022-05-26 05:51:56,297] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
[2022-05-26 05:51:56,353] {manager.py:508} INFO - Created Permission View: menu access on Permissions
[2022-05-26 05:51:56,358] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
[2022-05-26 05:52:26,954] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
[2022-05-26 05:52:26,971] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
[2022-05-26 05:52:27,033] {manager.py:508} INFO - Created Permission View: menu access on Permissions
[2022-05-26 05:52:27,040] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
[2022-05-26 05:52:57,595] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
[2022-05-26 05:52:57,610] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
[2022-05-26 05:52:57,679] {manager.py:508} INFO - Created Permission View: menu access on Permissions
[2022-05-26 05:52:57,684] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
[2022-05-26 05:53:28,130] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
[2022-05-26 05:53:28,148] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
[2022-05-26 05:53:28,209] {manager.py:508} INFO - Created Permission View: menu access on Permissions
[2022-05-26 05:53:28,214] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
```
### What you think should happen instead
The log must be clean and gunicorn stable as a rock.
### How to reproduce
Just launch gunicorn
### Operating System
Ubuntu 20.04.4 LTS
### Versions of Apache Airflow Providers
```
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-mssql==2.1.3
apache-airflow-providers-microsoft-winrm==2.0.5
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-openfaas==2.0.3
apache-airflow-providers-oracle==2.2.3
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-samba==3.0.4
apache-airflow-providers-sftp==2.6.0
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.4
```
### Deployment
Virtualenv installation
### Deployment details
gunicorn is configured with AUTH_LDAP option
### Anything else
no
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23926 | https://github.com/apache/airflow/pull/24065 | 75fdbf01dd91cc65d8289b1d2f7183b1db08dae1 | be21e08e1bb6202626c12b2375f24167cf22838a | 2022-05-26T04:00:43Z | python | 2022-06-02T18:04:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,917 | ["setup.py"] | Wrong dependecy version of requests for Databricks provider | ### Apache Airflow Provider(s)
databricks
### Versions of Apache Airflow Providers
apache-airflow-providers-databricks==2.7.0
### Apache Airflow version
2.2.2
### Operating System
Debian GNU/Linux 11 (bullseye)
### Deployment
Other
### Deployment details
_No response_
### What happened
An import statement from the `requests` library was added to `airflow/providers/databricks/hooks/databricks_base.py` by [this MR](https://github.com/apache/airflow/pull/22422/files#diff-bfbc446378c91e1c398eb07d02dc333703bb1dda4cbe078193b16199f11db8a5R34).
But the minimal version of requests wasn't bumped up to `>=2.27.0`; as we can see, [it's still >=2.26.0](https://github.com/apache/airflow/blob/main/setup.py#L269), even though the `JSONDecodeError` class was only added to `requests` starting from [2.27.0 (see release notes)](https://github.com/psf/requests/releases/tag/v2.27.0).
This brings an inconsistency between the libraries and leads to the following error:
```
File "/usr/local/lib/python3.7/site-packages/airflow/providers/databricks/hooks/databricks.py", line 33, in <module>
from airflow.providers.databricks.hooks.databricks_base import BaseDatabricksHook
File "/usr/local/lib/python3.7/site-packages/airflow/providers/databricks/hooks/databricks_base.py", line 34, in <module>
from requests.exceptions import JSONDecodeError
ImportError: cannot import name 'JSONDecodeError' from 'requests.exceptions'
```
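For reference, a common compatibility pattern (not necessarily what the provider ended up doing) is to fall back to the stdlib exception on older requests versions:
```python
try:
    from requests.exceptions import JSONDecodeError  # requests >= 2.27.0
except ImportError:
    from json import JSONDecodeError  # requests < 2.27.0 raises the stdlib error
```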
### What you think should happen instead
The import should succeed; instead, it breaks DAGs.
### How to reproduce
Use any Databricks operator with `requests==2.26.0`, which is defined as the minimal compatible version.
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23917 | https://github.com/apache/airflow/pull/23927 | b170dc7d66a628e405a824bfbc9fb48a3b3edd63 | 80c3fcd097c02511463b2c4f586757af0e5f41b2 | 2022-05-25T18:10:21Z | python | 2022-05-27T01:57:12Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,871 | ["airflow/dag_processing/manager.py", "airflow/utils/process_utils.py", "tests/utils/test_process_utils.py"] | `dag-processor` failed to start in docker | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Standalone DagProcessor which run in Apache Airflow Production Docker Image failed with error
```
airflow-dag-processor_1 |
airflow-dag-processor_1 | Traceback (most recent call last):
airflow-dag-processor_1 | File "/home/airflow/.local/bin/airflow", line 8, in <module>
airflow-dag-processor_1 | sys.exit(main())
airflow-dag-processor_1 | File "/home/airflow/.local/lib/python3.7/site-packages/airflow/__main__.py", line 38, in main
airflow-dag-processor_1 | args.func(args)
airflow-dag-processor_1 | File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 51, in command
airflow-dag-processor_1 | return func(*args, **kwargs)
airflow-dag-processor_1 | File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py", line 99, in wrapper
airflow-dag-processor_1 | return f(*args, **kwargs)
airflow-dag-processor_1 | File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/dag_processor_command.py", line 80, in dag_processor
airflow-dag-processor_1 | manager.start()
airflow-dag-processor_1 | File "/home/airflow/.local/lib/python3.7/site-packages/airflow/dag_processing/manager.py", line 475, in start
airflow-dag-processor_1 | os.setpgid(0, 0)
airflow-dag-processor_1 | PermissionError: [Errno 1] Operation not permitted
```
This error not happen if directly run in host system by `airflow dag-processor`
Seems like this issue happen because when we run in Apache Airflow Production Docker Image `airflow` process is session leader, and according to `man setpgid`
```
ERRORS
setpgid() will fail and the process group will not be altered if:
[EPERM] The process indicated by the pid argument is a session leader.
```
### What you think should happen instead
`dag-processor` should start in docker without error
### How to reproduce
1. Use simple docker-compose file which use official Airflow 2.3.0 image
```yaml
# docker-compose-dag-processor.yaml
version: '3'
volumes:
postgres-db-volume:
aiflow-logs-volume:
x-airflow-common:
&airflow-common
image: apache/airflow:2.3.0-python3.7
environment:
AIRFLOW__SCHEDULER__STANDALONE_DAG_PROCESSOR: 'True'
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:insecurepassword@postgres/airflow
volumes:
- aiflow-logs-volume:/opt/airflow/logs
- ${PWD}/dags:/opt/airflow/dags
user: "${AIRFLOW_UID:-50000}:0"
extra_hosts:
- "host.docker.internal:host-gateway"
services:
postgres:
image: postgres:13
environment:
POSTGRES_USER: airflow
POSTGRES_PASSWORD: insecurepassword
POSTGRES_DB: airflow
ports:
- 55432:5432
volumes:
- postgres-db-volume:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "airflow"]
interval: 5s
retries: 5
restart: unless-stopped
airflow-upgrade-db:
<<: *airflow-common
command: ["db", "upgrade"]
depends_on:
postgres:
condition: service_healthy
airflow-dag-processor:
<<: *airflow-common
command: dag-processor
restart: unless-stopped
depends_on:
airflow-upgrade-db:
condition: service_completed_successfully
```
2. `docker-compose -f docker-compose-dag-processor.yaml up`
### Operating System
macOS Monterey 12.3.1
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Docker: **20.10.12**
docker-compose: **1.29.2**
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23871 | https://github.com/apache/airflow/pull/23872 | 8ccff9244a6d1a936d8732721373b967e95ec404 | 9216489d9a25f56f7a55d032b0ebfc1bf0bf4a83 | 2022-05-23T15:15:46Z | python | 2022-05-27T14:29:11Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,868 | ["dev/breeze/src/airflow_breeze/commands/testing_commands.py"] | Don’t show traceback on 'breeze tests' subprocess returning non-zero | ### Body
Currently, if any tests fail when `breeze tests` is run, Breeze 2 emits a traceback pointing to the `docker-compose` subprocess call. This is due to Docker propagating the exit code of the underlying `pytest` subprocess. While it is technically correct to emit an exception, the traceback is useless in this context, and only clutters the output. It may be a good idea to add a special case for this and suppress the exception.
A similar situation can be observed with `breeze shell` if you run `exit 1` in the shell.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23868 | https://github.com/apache/airflow/pull/23897 | 1bf6dded9a5dcc22238b8943028b08741e36dfe5 | d788f4b90128533b1ac3a0622a8beb695b52e2c4 | 2022-05-23T14:12:38Z | python | 2022-05-24T20:56:25Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,867 | ["dev/breeze/src/airflow_breeze/commands/ci_image_commands.py", "dev/breeze/src/airflow_breeze/utils/md5_build_check.py", "images/breeze/output-commands-hash.txt"] | Don’t prompt for 'breeze build-image' | ### Body
Currently, running the (new) `breeze build-image` brings up two prompts if any of the meta files are outdated:
```
$ breeze build-image
Good version of Docker: 20.10.14.
Good version of docker-compose: 2.5.1
The following important files are modified in ./airflow since last time image was built:
* setup.py
* Dockerfile.ci
* scripts/docker/common.sh
* scripts/docker/install_additional_dependencies.sh
* scripts/docker/install_airflow.sh
* scripts/docker/install_airflow_dependencies_from_branch_tip.sh
* scripts/docker/install_from_docker_context_files.sh
Likely CI image needs rebuild
Do you want to build the image (this works best when you have good connection and can take usually from 20 seconds to few minutes depending how old your image is)?
Press y/N/q. Auto-select n in 10 seconds (add `--answer n` to avoid delay next time): y
This might take a lot of time (more than 10 minutes) even if you havea good network connection. We think you should attempt to rebase first.
But if you really, really want - you can attempt it. Are you really sure?
Press y/N/q. Auto-select n in 10 seconds (add `--answer n` to avoid delay next time): y
```
While the prompts are shown in good nature, they don’t really make sense for `build-image` since the user already gave an explicit answer by running `build-image`. They should be suppressed.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23867 | https://github.com/apache/airflow/pull/23898 | cac7ab5c4f4239b04d7800712ee841f0e6f6ab86 | 90940b529340ef7f9b8c51d5c7d9b6a848617dea | 2022-05-23T13:44:37Z | python | 2022-05-24T16:27:25Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,838 | ["airflow/models/mappedoperator.py", "tests/serialization/test_dag_serialization.py"] | AIP-45 breaks follow-on mini scheduler for mapped tasks | I've just noticed that this causes a problem for the follow-on mini scheduler for mapped tasks. I guess that code path wasn't sufficiently unit tested.
DAG
```python
import csv
import io
import os
import json
from datetime import datetime
from airflow import DAG
from airflow.decorators import task
from airflow.models.xcom_arg import XComArg
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
from airflow.providers.amazon.aws.operators.s3 import S3ListOperator
with DAG(dag_id='mapped_s3', start_date=datetime(2022, 5, 19)) as dag:
files = S3ListOperator(
task_id="get_inputs",
bucket="airflow-summit-2022",
prefix="data_provider_a/{{ data_interval_end | ds }}/",
delimiter='/',
do_xcom_push=True,
)
@task
def csv_to_json(aws_conn_id, input_bucket, key, output_bucket):
hook = S3Hook(aws_conn_id=aws_conn_id)
csv_data = hook.read_key(key, input_bucket)
reader = csv.DictReader(io.StringIO(csv_data))
output = io.BytesIO()
for row in reader:
output.write(json.dumps(row, indent=None).encode('utf-8'))
output.write(b"\n")
output.seek(0, os.SEEK_SET)
hook.load_file_obj(output, key=key.replace(".csv", ".json"), bucket_name=output_bucket)
csv_to_json.partial(
aws_conn_id="aws_default", input_bucket=files.bucket, output_bucket="airflow-summit-2022-processed"
).expand(key=XComArg(files))
```
Error:
```
File "/home/ash/code/airflow/airflow/airflow/jobs/local_task_job.py", line 253, in _run_mini_scheduler_on_child_tasks
info = dag_run.task_instance_scheduling_decisions(session)
File "/home/ash/code/airflow/airflow/airflow/utils/session.py", line 68, in wrapper
return func(*args, **kwargs)
File "/home/ash/code/airflow/airflow/airflow/models/dagrun.py", line 658, in task_instance_scheduling_decisions
schedulable_tis, changed_tis, expansion_happened = self._get_ready_tis(
File "/home/ash/code/airflow/airflow/airflow/models/dagrun.py", line 714, in _get_ready_tis
expanded_tis, _ = schedulable.task.expand_mapped_task(self.run_id, session=session)
File "/home/ash/code/airflow/airflow/airflow/models/mappedoperator.py", line 609, in expand_mapped_task
operator.mul, self._resolve_map_lengths(run_id, session=session).values()
File "/home/ash/code/airflow/airflow/airflow/models/mappedoperator.py", line 591, in _resolve_map_lengths
expansion_kwargs = self._get_expansion_kwargs()
File "/home/ash/code/airflow/airflow/airflow/models/mappedoperator.py", line 526, in _get_expansion_kwargs
return getattr(self, self._expansion_kwargs_attr)
AttributeError: 'MappedOperator' object has no attribute 'mapped_op_kwargs'
```
_Originally posted by @ashb in https://github.com/apache/airflow/issues/21877#issuecomment-1133409500_ | https://github.com/apache/airflow/issues/23838 | https://github.com/apache/airflow/pull/24772 | 1abdf3fd1e048f53e061cc9ad59177be7b5245ad | 6fd06fa8c274b39e4ed716f8d347229e017ba8e5 | 2022-05-20T21:41:26Z | python | 2022-07-05T09:36:33Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,833 | ["airflow/decorators/base.py", "airflow/models/mappedoperator.py", "airflow/serialization/serialized_objects.py", "tests/serialization/test_dag_serialization.py"] | Dynamic Task Mapping not working with op_kwargs in PythonOperator | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The following DAG was written and expected to generate 3 tasks (one for each string in the list)
**dag_code**
```python
import logging
from airflow.decorators import dag, task
from airflow.operators.python import PythonOperator
from airflow.utils.dates import datetime
def log_strings_operator(string, *args, **kwargs):
logging.info("we've made it into the method")
logging.info(f"operator log - {string}")
@dag(
dag_id='dynamic_dag_test',
schedule_interval=None,
start_date=datetime(2021, 1, 1),
catchup=False,
tags=['example', 'dynamic_tasks']
)
def tutorial_taskflow_api_etl():
op2 = (PythonOperator
.partial(task_id="logging_with_operator_task",
python_callable=log_strings_operator)
.expand(op_kwargs=[{"string": "a"}, {"string": "b"}, {"string": "c"}]))
return op2
tutorial_etl_dag = tutorial_taskflow_api_etl()
```
**error message**
```python
Broken DAG: [/usr/local/airflow/dags/dynamic_dag_test.py] Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 343, in _serialize
return SerializedBaseOperator.serialize_mapped_operator(var)
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 608, in serialize_mapped_operator
assert op_kwargs[Encoding.TYPE] == DAT.DICT
TypeError: list indices must be integers or slices, not Encoding
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1105, in to_dict
json_dict = {"__version": cls.SERIALIZER_VERSION, "dag": cls.serialize_dag(var)}
File "/usr/local/lib/python3.9/site-packages/airflow/serialization/serialized_objects.py", line 1013, in serialize_dag
raise SerializationError(f'Failed to serialize DAG {dag.dag_id!r}: {e}')
airflow.exceptions.SerializationError: Failed to serialize DAG 'dynamic_dag_test': list indices must be integers or slices, not Encoding
```
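A possible workaround sketch is to sidestep the classic-operator `op_kwargs` serialization entirely by using the TaskFlow API inside the same `@dag` function; whether it matches the original operator's behaviour in every detail is an assumption:
```python
@task(task_id="logging_with_operator_task")
def log_string(string):
    logging.info("we've made it into the method")
    logging.info(f"operator log - {string}")

log_string.expand(string=["a", "b", "c"])
```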
### What you think should happen instead
The DAG should contain one task, `logging_with_operator_task`, with 3 mapped indices.
### How to reproduce
Copy/paste the DAG code into a DAG file and run it on Airflow 2.3.0; the Airflow UI will flag the error.
### Operating System
Debian
### Versions of Apache Airflow Providers
_No response_
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23833 | https://github.com/apache/airflow/pull/23860 | 281e54b442f0a02bda53ae847aae9f371306f246 | 5877f45d65d5aa864941efebd2040661b6f89cb1 | 2022-05-20T17:19:23Z | python | 2022-06-22T07:48:50Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,826 | ["airflow/providers/google/cloud/operators/bigquery.py", "tests/providers/google/cloud/operators/test_bigquery.py"] | BigQueryInsertJobOperator is broken on any type of job except `query` | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==7.0.0
### Apache Airflow version
2.2.5
### Operating System
MacOS 12.2.1
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### What happened
We are using `BigQueryInsertJobOperator` to load data from parquet files in Google Cloud Storage with this kind of configuration:
```
BigQueryInsertJobOperator(
task_id="load_to_bq",
configuration={
"load": {
"writeDisposition": "WRITE_APPEND",
"createDisposition": "CREATE_IF_NEEDED",
"destinationTable": destination_table,
"sourceUris": source_files
"sourceFormat": "PARQUET"
}
}
```
After upgrading to `apache-airflow-providers-google==7.0.0`, all load jobs are broken. I believe the problem lies in this line:
https://github.com/apache/airflow/blob/5bfacf81c63668ea63e7cb48f4a708a67d0ac0a2/airflow/providers/google/cloud/operators/bigquery.py#L2170
So it tries to get the destination table from the `query` job config, which makes it impossible to use any other type of job; a defensive guard is sketched below.
_No response_
### How to reproduce
Use BigQueryInsertJobOperator to submit any type of job except `query`
### Anything else
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/providers/google/cloud/operators/bigquery.py", line 2170, in execute
table = job.to_api_repr()["configuration"]["query"]["destinationTable"]
KeyError: 'query'
```
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23826 | https://github.com/apache/airflow/pull/24165 | 389e858d934a7813c7f15ab4e46df33c5720e415 | a597a76e8f893865e7380b072de612763639bfb9 | 2022-05-20T09:58:37Z | python | 2022-06-03T17:52:45Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,824 | ["airflow/jobs/scheduler_job.py", "tests/jobs/test_scheduler_job.py"] | Race condition between Triggerer and Scheduler | ### Apache Airflow version
2.2.5
### What happened
Deferrable tasks that trigger instantly after getting deferred might get their state set to `FAILED` by the scheduler.
The triggerer can fire the trigger and the scheduler can re-queue the task instance before the scheduler has had a chance to process the executor event for when the ti got deferred.
### What you think should happen instead
This code block should not run in this instance:
https://github.com/apache/airflow/blob/5bfacf81c63668ea63e7cb48f4a708a67d0ac0a2/airflow/jobs/scheduler_job.py#L667-L692
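For illustration only, one conceivable shape of a guard; this is not the actual patch, and `event_buffer_try_number` is a hypothetical name for the try number carried by the executor event:
```python
# conceptual excerpt from the executor-event handling loop in scheduler_job.py
if ti.try_number > event_buffer_try_number or ti.next_method is not None:
    # the trigger already fired and the TI was re-queued for a new attempt,
    # so this executor event is stale and must not fail the task
    self.log.info("Ignoring stale executor event for %s", ti)
    continue
```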
### How to reproduce
Most importantly, have a trigger that fires instantly. I'm not sure if the executor type is important (I'm running `CeleryExecutor`). Also, having two schedulers might be important.
### Operating System
Arch Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23824 | https://github.com/apache/airflow/pull/23846 | 94f4f81efb8c424bee8336bf6b8720821e48898a | 66ffe39b0b3ae233aeb80e77eea1b2b867cc8c45 | 2022-05-20T09:22:34Z | python | 2022-06-28T22:14:09Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,823 | ["airflow/providers_manager.py"] | ModuleNotFoundExceptions not matched as optional features | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The `providers_manager.py` logs an import warning with stack trace (see example) for optional provider features instead of an info message noting the optional feature is disabled. Sample message:
```
[2022-05-19 21:46:53,065] {providers_manager.py:223} WARNING - Exception when importing 'airflow.providers.google.cloud.hooks.compute_ssh.ComputeEngineSSHHook' from 'apache-airflow-providers-google' package
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/airflow/providers_manager.py", line 257, in _sanity_check
imported_class = import_string(class_name)
File "/usr/local/lib/python3.9/site-packages/airflow/utils/module_loading.py", line 32, in import_string
module = import_module(module_path)
File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/usr/local/lib/python3.9/site-packages/airflow/providers/google/cloud/hooks/compute_ssh.py", line 23, in <module>
import paramiko
ModuleNotFoundError: No module named 'paramiko'
```
### What you think should happen instead
There is explicit code for catching `ModuleNotFoundError`s, so these import errors should be logged as info messages like:
```
[2022-05-20 08:18:54,680] {providers_manager.py:215} INFO - Optional provider feature disabled when importing 'airflow.providers.google.cloud.hooks.compute_ssh.ComputeEngineSSHHook' from 'apache-airflow-providers-google' package
```
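Paraphrasing the intended handling (helper and variable names approximate the real `_sanity_check` code rather than quote it):
```python
import logging

from airflow.utils.module_loading import import_string

log = logging.getLogger(__name__)

def _sanity_check(class_name: str, provider_package: str):
    try:
        return import_string(class_name)
    except ModuleNotFoundError as e:
        # an optional dependency is missing: this is expected, so log at INFO
        log.info(
            "Optional provider feature disabled when importing %r from %r package: %s",
            class_name, provider_package, e,
        )
    except Exception:
        log.warning(
            "Exception when importing %r from %r package",
            class_name, provider_package, exc_info=True,
        )
    return None
```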
### How to reproduce
Install the `google` provider but do not install the `ssh` submodule (or alternatively the `mysql` module). Various airflow components will produce the above warning logs.
### Operating System
Debian bullseye
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other Docker-based deployment
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23823 | https://github.com/apache/airflow/pull/23825 | 5bfacf81c63668ea63e7cb48f4a708a67d0ac0a2 | 6f5749c0d04bd732b320fcbe7713f2611e3d3629 | 2022-05-20T08:44:08Z | python | 2022-05-20T12:08:04Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,822 | ["airflow/providers/amazon/aws/example_dags/example_dms.py", "airflow/providers/amazon/aws/operators/rds.py", "docs/apache-airflow-providers-amazon/operators/rds.rst", "tests/providers/amazon/aws/operators/test_rds.py", "tests/system/providers/amazon/aws/rds/__init__.py", "tests/system/providers/amazon/aws/rds/example_rds_instance.py"] | Add an AWS operator for Create RDS Database | ### Description
@eladkal suggested we add the operator and then incorporate it into https://github.com/apache/airflow/pull/23681. I have a little bit of a backlog right now trying to get the System Tests up and running for AWS, but if someone wants to get to it before me, it should be a pretty easy first contribution.
The required API call is documented [here](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.create_db_instance) and I'm happy to help with any questions and./or help review it if someone wants to take a stab at it before I get the time.
### Use case/motivation
_No response_
### Related issues
Could be used to simplify https://github.com/apache/airflow/pull/23681
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23822 | https://github.com/apache/airflow/pull/24099 | c7feb31786c7744da91d319f499d9f6015d82454 | bf727525e1fd777e51cc8bc17285f6093277fdef | 2022-05-20T01:28:34Z | python | 2022-06-28T19:32:17Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,799 | ["docs/apache-airflow-providers-google/operators/cloud/cloud_sql.rst"] | Fix typo in cloud_sql.rst | ### What do you see as an issue?
CloudSqlInstanceImportOperator does not exist.
https://airflow.apache.org/docs/apache-airflow-providers-google/stable/operators/cloud/cloud_sql.html#cloudsqlinstanceimportoperator
<img width="1040" alt="image" src="https://user-images.githubusercontent.com/12693596/169287518-45e8dc8b-7e57-4357-ae11-ffa2642e1498.png">
### Solving the problem
Update CloudSqlInstanceImportOperator to CloudSQLImportInstanceOperator.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23799 | https://github.com/apache/airflow/pull/23800 | 8f3ce33e921056db3bac34dcf38b238cbf417279 | 54aa23405240b9db553bf639de85b2d86364420b | 2022-05-19T11:55:41Z | python | 2022-05-19T23:02:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,796 | ["airflow/config_templates/airflow_local_settings.py", "airflow/config_templates/config.yml", "airflow/config_templates/default_airflow.cfg", "airflow/utils/log/colored_log.py", "airflow/utils/log/timezone_aware.py", "airflow/www/static/js/grid/details/content/taskInstance/Logs/utils.js", "airflow/www/static/js/ti_log.js", "newsfragments/24373.significant.rst"] | Webserver shows wrong datetime (timezone) in log | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Same as #19401: when I open a task's log in the web interface, it shifts the time forward by 8 hours (for Asia/Shanghai) even though the timestamp is already in Asia/Shanghai.
here is the log in web:
```
*** Reading local file: /opt/airflow/logs/forecast/cal/2022-05-18T09:50:00+00:00/1.log
[2022-05-19, 13:54:52] {taskinstance.py:1037} INFO ...
```
As you can see, the UTC time is 2022-05-18T09:50:00 and my timezone is Asia/Shanghai (which should shift it forward 8 hours), but it is shifted forward 16 hours!
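A small sketch of what seems to be happening: the already-local timestamp is treated as UTC and converted again, so the 8-hour offset is applied twice (pendulum is used here just for illustration):
```python
import pendulum

# The log line is already rendered in Asia/Shanghai (UTC+8).
local_str = "2022-05-19 13:54:52"

# Bug: the UI appends "+00:00", i.e. treats it as UTC, then converts to the
# browser timezone, adding the 8-hour offset a second time.
as_utc = pendulum.parse(local_str, tz="UTC")
shown = as_utc.in_timezone("Asia/Shanghai")
print(shown)  # 2022-05-19 21:54:52, i.e. 16 hours ahead of the real UTC time
```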
### What you think should happen instead
_No response_
### How to reproduce
_No response_
### Operating System
Debian GNU/Linux 11 (bullseye)(docker)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
1. build my docker image from apache/airflow:2.3.0 to change timezone
```Dockerfile
FROM apache/airflow:2.3.0
# bugfix of log UI in web, here I change ti_log.js file by following on #19401
COPY ./ti_log.js /home/airflow/.local/lib/python3.7/site-packages/airflow/www/static/js/ti_log.js
USER root
# container timezone changed to CST time
RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
&& rm -rf /etc/timezone \
&& echo Asia/Shanghai >> /etc/timezone \
&& chown airflow /home/airflow/.local/lib/python3.7/site-packages/airflow/www/static/js/ti_log.js
USER airflow
```
2. use my image to run airflow by docker-compose
3. check task logs in web
Although I changed the file `airflow/www/static/js/ti_log.js`, it did not work! Then, checking the page source in the browser, I found another file, `airflow/www/static/dist/tiLog.e915520196109d459cf8.js`, and replaced "+00:00" with "+08:00" in that file. Finally it works!
```js
// original tiLog.e915520196109d459cf8.js
replaceAll(c,(e=>`<time datetime="${e}+00:00" data-with-tz="true">${Object(a.f)(`${e}+00:00`)}</time>`))
```
```js
// what I changed
replaceAll(c,(e=>`<time datetime="${e}+08:00" data-with-tz="true">${Object(a.f)(`${e}+08:00`)}</time>`))
```
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23796 | https://github.com/apache/airflow/pull/24373 | 5a8209e5096528b6f562efebbe71b6b9c378aaed | 7de050ceeb381fb7959b65acd7008e85b430c46f | 2022-05-19T09:11:51Z | python | 2022-06-24T13:17:24Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,792 | ["airflow/models/expandinput.py", "tests/models/test_mappedoperator.py"] | Dynamic task mapping creates too many mapped instances when task pushed non-default XCom | ### Apache Airflow version
2.3.0 (latest released)
### What happened
Excess tasks are created when using dynamic task mapping with KubernetesPodOperator, but only in certain cases which I do not understand. I have a simple working example of this where the flow is:
- One task that returns a list XCom (list of lists, since I'm partial-ing to `KubernetesPodOperator`'s `arguments`) of length 3. This looks like `[["a"], ["b"], ["c"]]`
- A `partial` from this, which is expanded on the above's result. Each resulting task has an XCom of a single element list that looks like `["d"]`. We expect the `expand` to result in 3 tasks, which it does. So far so good. Why doesn't the issue occur at this stage? No clue.
- A `partial` from the above. We expect 3 tasks in this final stage, but get 9. 3 succeed and 6 fail consistently. This 3x rule scales to as many tasks as you define in step 2 (e.g. 2 tasks in step 2 -> 6 tasks in step 3, where 4 fail)

If I recreate this using the TaskFlow API with `PythonOperator`s, I get the expected result of 1 task -> 3 tasks -> 3 tasks

Futhermore, if I attempt to look at the `Rendered Template` of the failed tasks in the `KubernetesPodOperator` implementation (first image), I consistently get `Error rendering template` and all the fields are `None`. The succeeded tasks look normal.

Since the `Rendered Template` view fails to load, I can't confirm what is actually getting provided to these failing tasks' `argument` parameter. If there's a way I can query the meta database to see this, I'd be glad to if given instruction.
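In the meantime, here is one way to inspect what was actually stored, assuming the default Airflow metadata models (a sketch; adjust the `dag_id` as needed):
```python
from airflow.models import XCom
from airflow.utils.session import create_session

with create_session() as session:
    # Dump every XCom row for the DAG, including any pod_name/pod_namespace entries.
    for row in session.query(XCom).filter(XCom.dag_id == "test-pod-xcoms"):
        print(row.task_id, row.key, row.value)
```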
### What you think should happen instead
I think this has to do with how XComs are specially handled with the `KubernetesPodOperator`. If we look at the XComs tab of the upstream task (`some-task-2` in the above images), we see that the return value specifies `pod_name` and `pod_namespace` along with `return_value`.

Whereas in the `t2` task of the TaskFlow version, it only contains `return_value`.

I haven't dug through the code to verify, but I have a strong feeling these extra values `pod_name` and `pod_namespace` are being used to generate the `OperatorPartial`/`MappedOperator` as well when they shouldn't be.
### How to reproduce
Run this DAG in a k8s context:
```
from datetime import datetime
from airflow import XComArg
from airflow.models import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

def make_operator(
    **kwargs
):
    return KubernetesPodOperator(
        **{
            'get_logs': True,
            'in_cluster': True,
            'is_delete_operator_pod': True,
            'namespace': 'default',
            'startup_timeout_seconds': 600,
            **kwargs,
        }
    )

def make_partial_operator(
    **kwargs
):
    return KubernetesPodOperator.partial(
        **{
            'get_logs': True,
            'in_cluster': True,
            'is_delete_operator_pod': True,
            'namespace': 'default',
            'startup_timeout_seconds': 600,
            **kwargs,
        }
    )

with DAG(dag_id='test-pod-xcoms',
         schedule_interval=None,
         start_date=datetime(2020, 1, 1),
         max_active_tasks=20) as dag:
    op1 = make_operator(
        cmds=['python3', '-c', 'import json;f=open("/airflow/xcom/return.json", "w");f.write(json.dumps([["a"], ["b"], ["c"]]))'],
        image='python:3.9-alpine',
        name='airflow-private-image-pod-1',
        task_id='some-task-1',
        do_xcom_push=True
    )
    op2 = make_partial_operator(
        cmds=['python3', '-c', 'import json;f=open("/airflow/xcom/return.json", "w");f.write(json.dumps(["d"]))'],
        image='python:3.9-alpine',
        name='airflow-private-image-pod-2',
        task_id='some-task-2',
        do_xcom_push=True
    )
    op3 = make_partial_operator(
        cmds=['echo', 'helloworld'],
        image='alpine:latest',
        name='airflow-private-image-pod-3',
        task_id='some-task-3',
    )
    op3.expand(arguments=XComArg(op2.expand(arguments=XComArg(op1))))
```
For the TaskFlow version of this that works, run this DAG (doesn't have to be k8s context):
```
from datetime import datetime
from airflow.decorators import task
from airflow.models import DAG, Variable

@task
def t1():
    return [[1], [2], [3]]

@task
def t2(val):
    return val

@task
def t3(val):
    print(val)

with DAG(dag_id='test-mapping',
         schedule_interval=None,
         start_date=datetime(2020, 1, 1)) as dag:
    t3.partial().expand(val=t2.partial().expand(val=t1()))
```
### Operating System
MacOS 11.6.5
### Versions of Apache Airflow Providers
Relevant:
```
apache-airflow-providers-cncf-kubernetes==4.0.1
```
Full:
```
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-docker==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-grpc==2.0.4
apache-airflow-providers-hashicorp==2.2.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-odbc==2.0.4
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-sendgrid==2.0.4
apache-airflow-providers-sftp==2.6.0
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Docker (Docker Desktop)
- Server Version: 20.10.13
- API Version: 1.41
- Builder: 2
Kubernetes (Docker Desktop)
- Env: docker-desktop
- Context: docker-desktop
- Cluster Name: docker-desktop
- Namespace: default
- Container Runtime: docker
- Version: v1.22.5
Helm:
- version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.5"}
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23792 | https://github.com/apache/airflow/pull/24530 | df388a3d5364b748993e61b522d0b68ff8b8124a | a69095fea1722e153a95ef9da93b002b82a02426 | 2022-05-19T01:02:41Z | python | 2022-07-27T08:36:23Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,786 | ["airflow/www/utils.py", "airflow/www/views.py"] | DAG Loading Slow with Dynamic Tasks - Including Test Results and Benchmarking | ### Apache Airflow version
2.3.0 (latest released)
### What happened
The web UI is slow to load the default (grid) view for DAGs when there are mapped tasks with a high number of expansions.
I did some testing with DAGs that have a variable number of tasks, along with changing the webserver resources to see how this affects the load times.
Here is a graph showing that testing. Let me know if you have any other questions about this.
<img width="719" alt="image" src="https://user-images.githubusercontent.com/89415310/169158337-ffb257ae-21bc-4c19-aaec-b29873d9fe93.png">
My findings based on what I'm seeing here:
The jump from 5->10 AUs makes a difference but 10 to 20 does not make a difference. There are diminishing returns when bumping up the webserver resources which leads me to believe that this could be a factor of database performance after the webserver is scaled to a certain point.
If we look at the graph on a log scale, it's almost perfectly linear for the 10 and 20AU lines on the plot. This leads me to believe that the time that it takes to load is a direct function of the number of task expansions that we have for a mapped task.
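For reference, this is roughly how the load times can be measured repeatably (a sketch that assumes a local webserver on port 8080 and either a pre-authenticated session or auth disabled):
```python
import time

import requests

# Adjust the base URL for your deployment.
BASE = "http://localhost:8080"

for scale in range(7, 13):
    dag_id = f"dynamic_task_mapping_{scale}"
    start = time.monotonic()
    # Grid view URL as served by the web UI.
    resp = requests.get(f"{BASE}/dags/{dag_id}/grid")
    print(dag_id, resp.status_code, f"{time.monotonic() - start:.1f}s")
```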
### What you think should happen instead
The web UI should load in a reasonable amount of time. Anything less than 10 seconds would be acceptable relative to the performance we're getting now; ideally, somewhere under 2-3 seconds would be best, if possible.
### How to reproduce
```
from datetime import datetime
from airflow.models import DAG
from airflow.operators.empty import EmptyOperator
from airflow.operators.python import PythonOperator

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email_on_failure': False,
    'email_on_retry': False,
}

initial_scale = 7
max_scale = 12
scaling_factor = 2

for scale in range(initial_scale, max_scale + 1):
    dag_id = f"dynamic_task_mapping_{scale}"
    with DAG(
        dag_id=dag_id,
        default_args=default_args,
        catchup=False,
        schedule_interval=None,
        start_date=datetime(1970, 1, 1),
        render_template_as_native_obj=True,
    ) as dag:
        start = EmptyOperator(task_id="start")
        mapped = PythonOperator.partial(
            task_id="mapped",
            python_callable=lambda m: print(m),
        ).expand(
            op_args=[[x] for x in list(range(2**scale))]
        )
        end = EmptyOperator(task_id="end")
        start >> mapped >> end
    globals()[dag_id] = dag
```
### Operating System
Debian
### Versions of Apache Airflow Providers
n/a
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23786 | https://github.com/apache/airflow/pull/23813 | 86cfd1244a641a8f17c9b33a34399d9be264f556 | 7ab5ea7853df9d99f6da3ab804ffe085378fbd8a | 2022-05-18T21:23:59Z | python | 2022-05-20T04:18:17Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,783 | ["airflow/providers/cncf/kubernetes/operators/kubernetes_pod.py", "kubernetes_tests/test_kubernetes_pod_operator.py", "kubernetes_tests/test_kubernetes_pod_operator_backcompat.py"] | Partial of a KubernetesPodOperator does not allow for limit_cpu and limit_memory in the resources argument | ### Apache Airflow version
2.3.0 (latest released)
### What happened
When performing dynamic task mapping and providing Kubernetes limits to the `resources` argument, the DAG raises an import error:
```
Broken DAG: [/opt/airflow/dags/bulk_image_processing.py] Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 287, in partial
partial_kwargs["resources"] = coerce_resources(partial_kwargs["resources"])
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/baseoperator.py", line 133, in coerce_resources
return Resources(**resources)
TypeError: __init__() got an unexpected keyword argument 'limit_cpu'
```
The offending code is:
```
KubernetesPodOperator.partial(
    get_logs=True,
    in_cluster=True,
    is_delete_operator_pod=True,
    namespace=settings.namespace,
    resources={'limit_cpu': settings.IMAGE_PROCESSING_OPERATOR_CPU, 'limit_memory': settings.IMAGE_PROCESSING_OPERATOR_MEMORY},
    service_account_name=settings.SERVICE_ACCOUNT_NAME,
    startup_timeout_seconds=600,
    **kwargs,
)
```
But you can see this in any DAG utilizing a `KubernetesPodOperator.partial` where the `partial` contains the `resources` argument.
### What you think should happen instead
The `resources` argument should be taken at face value and applied to the `OperatorPartial` and subsequently the `MappedOperator`.
### How to reproduce
Try to import this DAG using Airflow 2.3.0:
```
from datetime import datetime
from airflow import XComArg
from airflow.models import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

def make_operator(
    **kwargs
):
    return KubernetesPodOperator(
        **{
            'get_logs': True,
            'in_cluster': True,
            'is_delete_operator_pod': True,
            'namespace': 'default',
            'startup_timeout_seconds': 600,
            **kwargs,
        }
    )

def make_partial_operator(
    **kwargs
):
    return KubernetesPodOperator.partial(
        **{
            'get_logs': True,
            'in_cluster': True,
            'is_delete_operator_pod': True,
            'namespace': 'default',
            'startup_timeout_seconds': 600,
            **kwargs,
        }
    )

with DAG(dag_id='bulk_image_processing',
         schedule_interval=None,
         start_date=datetime(2020, 1, 1),
         max_active_tasks=20) as dag:
    op1 = make_operator(
        arguments=['--bucket-name', f'{{{{ dag_run.conf.get("bucket", "some-fake-default") }}}}'],
        cmds=['python3', 'some_entrypoint'],
        image='some-image',
        name='airflow-private-image-pod-1',
        task_id='some-task',
        do_xcom_push=True
    )
    op2 = make_partial_operator(
        image='another-image',
        name=f'airflow-private-image-pod-2',
        resources={'limit_cpu': '2000m', 'limit_memory': '16Gi'},
        task_id='another-task',
        cmds=[
            'some',
            'set',
            'of',
            'cmds'
        ]
    ).expand(arguments=XComArg(op1))
```
### Operating System
MacOS 11.6.5
### Versions of Apache Airflow Providers
Relevant:
```
apache-airflow-providers-cncf-kubernetes==4.0.1
```
Full:
```
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-docker==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-grpc==2.0.4
apache-airflow-providers-hashicorp==2.2.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-mysql==2.2.3
apache-airflow-providers-odbc==2.0.4
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-sendgrid==2.0.4
apache-airflow-providers-sftp==2.6.0
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-sqlite==2.1.3
apache-airflow-providers-ssh==2.4.3
```
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
Docker (Docker Desktop)
- Server Version: 20.10.13
- API Version: 1.41
- Builder: 2
Kubernetes (Docker Desktop)
- Env: docker-desktop
- Context: docker-desktop
- Cluster Name: docker-desktop
- Namespace: default
- Container Runtime: docker
- Version: v1.22.5
Helm:
- version.BuildInfo{Version:"v3.6.3", GitCommit:"d506314abfb5d21419df8c7e7e68012379db2354", GitTreeState:"dirty", GoVersion:"go1.16.5"}
### Anything else
You can get around this by creating the `partial` first without calling `expand` on it, setting the resources via the `kwargs` parameter, then calling `expand`. Example:
```
from datetime import datetime
from airflow import XComArg
from airflow.models import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator

def make_operator(
    **kwargs
):
    return KubernetesPodOperator(
        **{
            'get_logs': True,
            'in_cluster': True,
            'is_delete_operator_pod': True,
            'namespace': 'default',
            'startup_timeout_seconds': 600,
            **kwargs,
        }
    )

def make_partial_operator(
    **kwargs
):
    return KubernetesPodOperator.partial(
        **{
            'get_logs': True,
            'in_cluster': True,
            'is_delete_operator_pod': True,
            'namespace': 'default',
            'startup_timeout_seconds': 600,
            **kwargs,
        }
    )

with DAG(dag_id='bulk_image_processing',
         schedule_interval=None,
         start_date=datetime(2020, 1, 1),
         max_active_tasks=20) as dag:
    op1 = make_operator(
        arguments=['--bucket-name', f'{{{{ dag_run.conf.get("bucket", "some-fake-default") }}}}'],
        cmds=['python3', 'some_entrypoint'],
        image='some-image',
        name='airflow-private-image-pod-1',
        task_id='some-task',
        do_xcom_push=True
    )
    op2 = make_partial_operator(
        image='another-image',
        name=f'airflow-private-image-pod-2',
        task_id='another-task',
        cmds=[
            'some',
            'set',
            'of',
            'cmds'
        ]
    )
    op2.kwargs['resources'] = {'limit_cpu': '2000m', 'limit_memory': '16Gi'}
    op2.expand(arguments=XComArg(op1))
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23783 | https://github.com/apache/airflow/pull/24673 | 40f08900f2d1fb0d316b40dde583535a076f616b | 45f4290712f5f779e57034f81dbaab5d77d5de85 | 2022-05-18T18:46:44Z | python | 2022-06-28T06:45:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,772 | ["airflow/www/utils.py", "airflow/www/views.py"] | New grid view in Airflow 2.3.0 has very slow performance on large DAGs relative to tree view in 2.2.5 | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I upgraded a local dev deployment of Airflow from 2.2.5 to 2.3.0, then loaded the new `/dags/<dag_id>/grid` page for a few dag ids.
On a big DAG, I’m seeing 30+ second latency on the `/grid` API, followed by a 10+ second delay each time I click a green rectangle. For a smaller DAG I tried, the page was pretty snappy.
I went back to 2.2.5 and loaded the tree view for comparison, and saw that the `/tree/` endpoint on the large DAG had 9 seconds of latency, and clicking a green rectangle had instant responsiveness.
This is slow enough that it would be a blocker for my team to upgrade.
### What you think should happen instead
The grid view should be equally performant to the tree view it replaces
### How to reproduce
Generate a large DAG. Mine looks like the following:
- 900 tasks
- 150 task groups
- 25 historical runs
Compare against a small DAG, in my case:
- 200 tasks
- 36 task groups
- 25 historical runs
The large DAG is unusable, the small DAG is usable.
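If it helps, here is a sketch of a generator for a comparably large DAG; the task counts match what I described above, though the structure of my real DAG is more complex:
```python
from datetime import datetime

from airflow.models import DAG
from airflow.operators.empty import EmptyOperator
from airflow.utils.task_group import TaskGroup

with DAG(
    dag_id="big_grid_dag",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
) as dag:
    # 150 task groups x 6 tasks = 900 tasks total.
    for g in range(150):
        with TaskGroup(group_id=f"group_{g}"):
            for t in range(6):
                EmptyOperator(task_id=f"task_{t}")
```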
### Operating System
Ubuntu 20.04.3 LTS (Focal Fossa)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Docker-Compose
### Deployment details
Docker-compose deployment on an EC2 instance running ubuntu.
Airflow web server is nearly stock image from `apache/airflow:2.3.0-python3.9`
### Anything else
Screenshot of load time:
<img width="1272" alt="image" src="https://user-images.githubusercontent.com/643593/168957215-74eefcb0-578e-46c9-92b8-74c4a6a20769.png">
GIF of click latency:

### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23772 | https://github.com/apache/airflow/pull/23947 | 5ab58d057abb6b1f28eb4e3fb5cec7dc9850f0b0 | 1cf483fa0c45e0110d99e37b4e45c72c6084aa97 | 2022-05-18T04:37:28Z | python | 2022-05-26T19:53:22Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,765 | ["airflow/providers/amazon/aws/hooks/emr.py", "airflow/providers/amazon/aws/operators/emr.py", "tests/providers/amazon/aws/operators/test_emr_containers.py"] | Add support of tags assignment to job runs in EmrContainerOperator | ### Description
AWS allows assigning tags to individual EMR job runs on a virtual cluster. However, the [EmrContainerOperator](https://github.com/apache/airflow/blob/820d8eb7c631eea7f9d5cbd23840d026e99ae304/airflow/providers/amazon/aws/operators/emr.py#L121) currently does not provide the ability to pass custom tags for the job run.
Related docs: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/emr-containers.html#EMRContainers.Client.start_job_run
> tags (dict) --
> The tags assigned to job runs.
### Use case/motivation
**Motivation**
In our deployment, we have compliance requirements which require us to specify a custom set of tags for each job submitted on the EMR virtual cluster (a shared cluster). There is an alternative way to achieve this, but we would like to have this as a feature.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23765 | https://github.com/apache/airflow/pull/23769 | 65f3b18fc1142c0d23e715fa1a98f21662df9584 | e54ca47262579742fc3c297c7f8d4c48f2437f82 | 2022-05-17T20:04:10Z | python | 2022-05-22T14:04:58Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,756 | ["airflow/providers/amazon/aws/operators/eks.py"] | Amazon EKS example DAG raises warnning | ### Body
```
Importing package: airflow.providers.amazon.aws.example_dags
[2022-05-17 15:53:10,675] {eks.py:287} WARNING - The nodegroup_subnets should be List or string representing Python list and is {{ dag_run.conf['nodegroup_subnets'] }}. Defaulting to []
```
https://github.com/apache/airflow/blob/dec05fb6b29f2aff454bcbc3939b3b78ba5b785f/airflow/providers/amazon/aws/example_dags/example_eks_templated.py#L82
Example of the warning raised in the CI:
https://github.com/apache/airflow/runs/6474027737?check_suite_focus=true#step:12:1010
We should fix it. The example DAG should not yield warnings.
### Committer
- [X] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | https://github.com/apache/airflow/issues/23756 | https://github.com/apache/airflow/pull/23849 | 6150d283234b48f86362fd4da856e282dd91ebb4 | 5d2296becb9401df6ca58bb7d15d6655eb168aed | 2022-05-17T16:04:02Z | python | 2022-05-22T14:25:20Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,733 | ["airflow/www/templates/airflow/dag.html"] | Task Instance pop-up menu - some buttons not always clickable | ### Apache Airflow version
2.3.0 (latest released)
### What happened
See the recorded screencap: in the task instance pop-up menu, sometimes the top menu options aren't clickable until you move the mouse around a bit and find an area where it will allow you to click.
This only seems to affect the `Instance Details`, `Rendered`, `Log`, and `XCom` options, but not `List Instances, all runs` or `Filter Upstream`.
https://user-images.githubusercontent.com/15913202/168657933-532f58c6-7f33-4693-80cf-26436ff78ceb.mp4
### What you think should happen instead
The entire 'bubble' for the options such as 'XCom' should always be clickable, without having to find a 'sweet spot'
### How to reproduce
I am using Astro Runtime 5.0.0 in a localhost environment
### Operating System
macOS 11.5.2
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon==3.3.0
apache-airflow-providers-celery==2.1.4
apache-airflow-providers-cncf-kubernetes==4.0.1
apache-airflow-providers-databricks==2.6.0
apache-airflow-providers-elasticsearch==3.0.3
apache-airflow-providers-ftp==2.1.2
apache-airflow-providers-google==6.8.0
apache-airflow-providers-http==2.1.2
apache-airflow-providers-imap==2.2.3
apache-airflow-providers-microsoft-azure==3.8.0
apache-airflow-providers-postgres==4.1.0
apache-airflow-providers-redis==2.0.4
apache-airflow-providers-slack==4.2.3
apache-airflow-providers-snowflake==2.6.0
apache-airflow-providers-sqlite==2.1.3
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
I experience this in an Astro deployment as well (not just localhost) using the same runtime 5.0.0 image
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23733 | https://github.com/apache/airflow/pull/23736 | 71e4deb1b093b7ad9320eb5eb34eca8ea440a238 | 239a9dce5b97d45620862b42fd9018fdc9d6d505 | 2022-05-16T18:28:42Z | python | 2022-05-17T02:58:56Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,727 | ["airflow/exceptions.py", "airflow/executors/kubernetes_executor.py", "airflow/kubernetes/pod_generator.py", "airflow/models/taskinstance.py", "tests/executors/test_kubernetes_executor.py", "tests/kubernetes/test_pod_generator.py", "tests/models/test_taskinstance.py"] | Airflow 2.3 scheduler error: 'V1Container' object has no attribute '_startup_probe' | ### Apache Airflow version
2.3.0 (latest released)
### What happened
After migrating from Airflow 2.2.4 to 2.3.0 scheduler fell into crash loop throwing:
```
--- Logging error ---
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 736, in _execute
self._run_scheduler_loop()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 826, in _run_scheduler_loop
self.executor.heartbeat()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/executors/base_executor.py", line 171, in heartbeat
self.sync()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/executors/kubernetes_executor.py", line 613, in sync
self.kube_scheduler.run_next(task)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/executors/kubernetes_executor.py", line 300, in run_next
self.log.info('Kubernetes job is %s', str(next_job).replace("\n", " "))
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod.py", line 214, in __repr__
return self.to_str()
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod.py", line 210, in to_str
return pprint.pformat(self.to_dict())
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod.py", line 196, in to_dict
result[attr] = value.to_dict()
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod_spec.py", line 1070, in to_dict
result[attr] = list(map(
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod_spec.py", line 1071, in <lambda>
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_container.py", line 672, in to_dict
value = getattr(self, attr)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_container.py", line 464, in startup_probe
return self._startup_probe
AttributeError: 'V1Container' object has no attribute '_startup_probe'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/logging/__init__.py", line 1083, in emit
msg = self.format(record)
File "/usr/local/lib/python3.9/logging/__init__.py", line 927, in format
return fmt.format(record)
File "/usr/local/lib/python3.9/logging/__init__.py", line 663, in format
record.message = record.getMessage()
File "/usr/local/lib/python3.9/logging/__init__.py", line 367, in getMessage
msg = msg % self.args
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod.py", line 214, in __repr__
return self.to_str()
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod.py", line 210, in to_str
return pprint.pformat(self.to_dict())
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod.py", line 196, in to_dict
result[attr] = value.to_dict()
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod_spec.py", line 1070, in to_dict
result[attr] = list(map(
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_pod_spec.py", line 1071, in <lambda>
lambda x: x.to_dict() if hasattr(x, "to_dict") else x,
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_container.py", line 672, in to_dict
value = getattr(self, attr)
File "/home/airflow/.local/lib/python3.9/site-packages/kubernetes/client/models/v1_container.py", line 464, in startup_probe
return self._startup_probe
AttributeError: 'V1Container' object has no attribute '_startup_probe'
Call stack:
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler
_run_scheduler_job(args=args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_scheduler_job
job.run()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/base_job.py", line 244, in run
self._execute()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/jobs/scheduler_job.py", line 757, in _execute
self.executor.end()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/executors/kubernetes_executor.py", line 809, in end
self._flush_task_queue()
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/executors/kubernetes_executor.py", line 767, in _flush_task_queue
self.log.warning('Executor shutting down, will NOT run task=%s', task)
Unable to print the message and arguments - possible formatting error.
Use the traceback above to help find the error.
```
The kubernetes Python library version was exactly as specified in the constraints file: https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0/constraints-3.9.txt
### What you think should happen instead
Scheduler should work
### How to reproduce
Not 100% sure but:
1. Run Airflow 2.2.4 using official Helm Chart
2. Run some dags to have some records in DB
3. Migrate to 2.3.0 (replace 2.2.4 image with 2.3.0 one)
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
irrelevant
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
KubernetesExecutor
PostgreSQL (RDS) as Airflow DB
Python 3.9
Docker images built from `apache/airflow:2.3.0-python3.9` (some additional libraries installed)
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23727 | https://github.com/apache/airflow/pull/24117 | c8fa9e96e29f9f8b4ff9c7db416097fb70a87c2d | 0c41f437674f135fe7232a368bf9c198b0ecd2f0 | 2022-05-16T15:09:06Z | python | 2022-06-15T04:30:53Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,722 | ["airflow/providers/google/cloud/operators/cloud_sql.py", "tests/providers/google/cloud/operators/test_cloud_sql.py"] | Add fields to CLOUD_SQL_EXPORT_VALIDATION | ### Apache Airflow Provider(s)
google
### Versions of Apache Airflow Providers
apache-airflow-providers-google==5.0.0
### Apache Airflow version
2.1.2
### Operating System
GCP Container
### Deployment
Composer
### Deployment details
composer-1.17.1-airflow-2.1.2
### What happened
I got a validation warning.
Same as #23613.
### What you think should happen instead
The following fields are not implemented in CLOUD_SQL_EXPORT_VALIDATION and should be added:
- sqlExportOptions
- mysqlExportOptions
- masterData
- csvExportOptions
- escapeCharacter
- quoteCharacter
- fieldsTerminatedBy
- linesTerminatedBy
These are all the fields that have not been added.
https://cloud.google.com/sql/docs/mysql/admin-api/rest/v1beta4/operations#exportcontext
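A sketch of what the additions might look like, following the nested dict-spec style used by `CLOUD_SQL_EXPORT_VALIDATION` (the exact nesting and optionality here are assumptions based on the API docs above):
```python
# Hypothetical spec fragments mirroring the missing fields listed above.
sql_export_spec = dict(
    name="sqlExportOptions",
    type="dict",
    optional=True,
    fields=[
        dict(
            name="mysqlExportOptions",
            type="dict",
            optional=True,
            fields=[dict(name="masterData", optional=True)],
        ),
    ],
)
csv_export_extra_fields = [
    dict(name="escapeCharacter", optional=True),
    dict(name="quoteCharacter", optional=True),
    dict(name="fieldsTerminatedBy", optional=True),
    dict(name="linesTerminatedBy", optional=True),
]
```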
### How to reproduce
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23722 | https://github.com/apache/airflow/pull/23724 | 9e25bc211f6f7bba1aff133d21fe3865dabda53d | 3bf9a1df38b1ccfaf965a207d047b30452df1ba5 | 2022-05-16T11:05:33Z | python | 2022-05-16T19:16:09Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,705 | ["chart/templates/redis/redis-statefulset.yaml", "chart/values.schema.json", "chart/values.yaml", "tests/charts/test_annotations.py"] | Adding PodAnnotations for Redis Statefulset | ### Description
Most Airflow services come with the ability to add annotations, apart from Redis. This feature request adds this capability to the Redis helm template as well.
### Use case/motivation
Specifically for us, annotations and labels are used to integrate Airflow with external services, such as Datadog, and without them the integration becomes a bit more complex.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23705 | https://github.com/apache/airflow/pull/23708 | ef79a0d1c4c0a041d7ebf83b93cbb25aa3778a70 | 2af19f16a4d94e749bbf6c7c4704e02aac35fc11 | 2022-05-14T07:46:23Z | python | 2022-07-11T21:27:26Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,698 | ["airflow/utils/db_cleanup.py"] | airflow db clean - table missing exception not captured | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I am running on the Kubernetes Executor, so Celery-related tables were never created. I am using PostgreSQL as the database.
When I ran `airflow db clean`, it gave me the following exception:
```
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UndefinedTable: relation "celery_taskmeta" does not exist
LINE 3: FROM celery_taskmeta
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/__main__.py", line 38, in main
args.func(args)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/cli_parser.py", line 51, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/cli.py", line 99, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/cli/commands/db_command.py", line 195, in cleanup_tables
run_cleanup(
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/session.py", line 71, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/db_cleanup.py", line 311, in run_cleanup
_cleanup_table(
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/db_cleanup.py", line 228, in _cleanup_table
_print_entities(query=query, print_rows=False)
File "/home/airflow/.local/lib/python3.9/site-packages/airflow/utils/db_cleanup.py", line 137, in _print_entities
num_entities = query.count()
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 3062, in count
return self._from_self(col).enable_eagerloads(False).scalar()
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2803, in scalar
ret = self.one()
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2780, in one
return self._iter().one()
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/query.py", line 2818, in _iter
result = self.session.execute(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 1670, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1520, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 313, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1389, in _execute_clauseelement
ret = self._execute_context(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1748, in _execute_context
self._handle_dbapi_exception(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1929, in _handle_dbapi_exception
util.raise_(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_context
self.dialect.do_execute(
File "/home/airflow/.local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 716, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "celery_taskmeta" does not exist
LINE 3: FROM celery_taskmeta
^
[SQL: SELECT count(*) AS count_1
FROM (SELECT celery_taskmeta.id AS celery_taskmeta_id, celery_taskmeta.task_id AS celery_taskmeta_task_id, celery_taskmeta.status AS celery_taskmeta_status, celery_taskmeta.result AS celery_taskmeta_result, celery_taskmeta.date_done AS celery_taskmeta_date_done, celery_taskmeta.traceback AS celery_taskmeta_traceback
FROM celery_taskmeta
WHERE celery_taskmeta.date_done < %(date_done_1)s) AS anon_1]
[parameters: {'date_done_1': DateTime(2022, 1, 1, 0, 0, 0, tzinfo=Timezone('UTC'))}]
(Background on this error at: http://sqlalche.me/e/14/f405)
```
### What you think should happen instead
_No response_
### How to reproduce
1. Use an executor that do not require Celery
2. Use PostgreSQL as the database
3. Run `airflow db clean`
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23698 | https://github.com/apache/airflow/pull/23699 | 252ef66438ecda87a8aac4beed1f689f14ee8bec | a80b2fcaea984813995d4a2610987a1c9068fdb5 | 2022-05-13T11:35:22Z | python | 2022-05-19T16:47:56Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,693 | ["airflow/executors/kubernetes_executor.py", "tests/executors/test_kubernetes_executor.py"] | Airflow backfill command not working; Tasks are stuck in scheduled forever. | ### Apache Airflow version
2.2.1
### What happened
When I try to backfill using `airflow dags backfill ...`, the 'Run' shows running but the tasks are stuck in scheduled state forever.
1. Code for DAG to reproduce the problem
```python
import time
from datetime import timedelta

import pendulum

from airflow import DAG
from airflow.decorators import task
from airflow.models.dag import dag
from airflow.operators.bash import BashOperator
from airflow.operators.dummy import DummyOperator
from airflow.operators.python import PythonOperator

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'email': ['airflow@example.com'],
    'email_on_failure': True,
    'email_on_retry': False,
    'retries': 2,
    'retry_delay': timedelta(minutes=5),
}

def get_execution_date(**kwargs):
    ds = kwargs['ds']
    print(ds)

with DAG(
    'test_dag_2',
    default_args=default_args,
    description='Testing dag',
    start_date=pendulum.datetime(2022, 4, 2, tz='UTC'),
    schedule_interval="@hourly", max_active_runs=5, concurrency=10,
) as dag:
    t1 = BashOperator(
        task_id='task_1',
        depends_on_past=False,
        bash_command='sleep 5'
    )
    t2 = PythonOperator(
        task_id='get_execution_date',
        python_callable=get_execution_date
    )
    t1 >> t2
```
2. Airflow backfill CLI
```bash
$ airflow dags backfill --subdir /opt/airflow/dags/repo --reset-dagruns -s "2022-03-01 01:00:00" -e "2022-03-02 01:00:00" test_dag_2
```
3. Result
<img width="472" alt="image" src="https://user-images.githubusercontent.com/36870121/168201159-fcdf7640-2bf9-428f-849e-25f18dd2b504.png">
4. Discussion
The 'Run' shows running but the tasks are stuck in scheduled state forever.
Here are some variants I have checked (the issue persists with all of them); a quick metadata-DB check for the stuck tasks is sketched after this list.
- max_active_runs, concurrency
- -x option for backfill command
- --subdir option for backfill command
- --reset-dagruns for backfill command
- schedule_interval to `None` and `@once`
- Restart the Airflow scheduler pods.
- **Upgrade Airflow version to 2.3.0.**
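A quick way to list the stuck task instances straight from the metadata DB (a sketch, assuming the default Airflow 2.x models):
```python
from airflow.models import TaskInstance
from airflow.utils.session import create_session
from airflow.utils.state import State

with create_session() as session:
    stuck = (
        session.query(TaskInstance)
        .filter(
            TaskInstance.dag_id == "test_dag_2",
            TaskInstance.state == State.SCHEDULED,
        )
        .all()
    )
    for ti in stuck:
        print(ti.task_id, ti.run_id, ti.state)
```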
5. Related issue
https://github.com/apache/airflow/issues/13542
### What you think should happen instead
The backfill jobs should successfully done from the past.
### How to reproduce
_No response_
### Operating System
CentOS Linux release 7.9.2009 (Core)
### Versions of Apache Airflow Providers
apache-airflow-providers-amazon 3.3.0
apache-airflow-providers-celery 2.1.0
apache-airflow-providers-cncf-kubernetes 3.0.2
apache-airflow-providers-docker 2.6.0
apache-airflow-providers-elasticsearch 3.0.3
apache-airflow-providers-ftp 2.1.2
apache-airflow-providers-grpc 2.0.4
apache-airflow-providers-hashicorp 2.2.0
apache-airflow-providers-http 2.0.2
apache-airflow-providers-imap 2.2.3
apache-airflow-providers-postgres 4.1.0
apache-airflow-providers-redis 2.0.4
apache-airflow-providers-sendgrid 2.0.4
apache-airflow-providers-sftp 2.6.0
apache-airflow-providers-slack 4.2.3
apache-airflow-providers-sqlite 2.1.3
apache-airflow-providers-ssh 2.4.3
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else
Here are some logs from the scheduler pod; it appears the kubernetes_executor tries to delete the worker pod.
```bash
[2022-05-13 02:48:08,417] {kubernetes_executor.py:147} INFO - Event: testdag2task1.34ae944f04ee4a18a65f559163ca0d1a had an event of type MODIFIED
[2022-05-13 02:48:08,417] {kubernetes_executor.py:206} INFO - Event: testdag2task1.34ae944f04ee4a18a65f559163ca0d1a Succeeded
[2022-05-13 02:48:08,423] {kubernetes_executor.py:147} INFO - Event: testdag2task1.34ae944f04ee4a18a65f559163ca0d1a had an event of type DELETED
[2022-05-13 02:48:08,423] {kubernetes_executor.py:206} INFO - Event: testdag2task1.34ae944f04ee4a18a65f559163ca0d1a Succeeded
[2022-05-13 02:48:08,497] {backfill_job.py:397} INFO - [backfill progress] | finished run 0 of 25 | tasks waiting: 5 | succeeded: 0 | running: 5| failed: 0 | skipped: 0 | deadlocked: 0 | not ready: 5
[2022-05-13 02:48:13,390] {kubernetes_executor.py:375} INFO - Attempting to finish pod; pod_id: testdag2task1.34ae944f04ee4a18a65f559163ca0d1a; state: None; annotations: {'dag_id': 'test_dag_2', 'task_id': 'task_1', 'execution_date': None, 'run_id': 'backfill__2022-03-03T02:00:00+00:00', 'try_number': '1'}
[2022-05-13 02:48:13,391] {kubernetes_executor.py:375} INFO - Attempting to finish pod; pod_id: testdag2task1.34ae944f04ee4a18a65f559163ca0d1a; state: None; annotations: {'dag_id': 'test_dag_2', 'task_id': 'task_1', 'execution_date': None, 'run_id': 'backfill__2022-03-03T02:00:00+00:00', 'try_number': '1'}
[2022-05-13 02:48:13,392] {kubernetes_executor.py:576} INFO - Changing state of (TaskInstanceKey(dag_id='test_dag_2', task_id='task_1', run_id='backfill__2022-03-03T02:00:00+00:00', try_number=1), None, 'testdag2task1.34ae944f04ee4a18a65f559163ca0d1a', 'jutopia-chlee-test-chlee-backfill-test', '3559062271') to None
[2022-05-13 02:48:13,396] {kubernetes_executor.py:661} INFO - Deleted pod: TaskInstanceKey(dag_id='test_dag_2', task_id='task_1', run_id='backfill__2022-03-03T02:00:00+00:00', try_number=1) in namespace jutopia-chlee-test-chlee-backfill-test
```
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23693 | https://github.com/apache/airflow/pull/23720 | 49cfb6498eed0acfc336a24fd827b69156d5e5bb | 640d4f9636d3867d66af2478bca15272811329da | 2022-05-13T03:05:56Z | python | 2022-11-18T01:09:31Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,692 | ["docs/apache-airflow/extra-packages-ref.rst"] | Conflicts with airflow constraints for airflow 2.3.0 python 3.9 | ### Apache Airflow version
2.3.0 (latest released)
### What happened
When installing Airflow 2.3.0 using the pip command with the "all" extra, it fails on the google-ads dependency:
`pip install "apache-airflow[all]==2.3.0" -c "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0/constraints-3.9.txt"`
> The conflict is caused by:
apache-airflow[all] 2.3.0 depends on google-ads>=15.1.1; extra == "all"
The user requested (constraint) google-ads==14.0.0
I changed the version of google-ads to 15.1.1, but then it failed on the databricks-sql-connector dependency:
> The conflict is caused by:
apache-airflow[all] 2.3.0 depends on databricks-sql-connector<3.0.0 and >=2.0.0; extra == "all"
The user requested (constraint) databricks-sql-connector==1.0.2
and then on different dependencies...
### What you think should happen instead
_No response_
### How to reproduce
(venv) [root@localhost]# `python -V`
Python 3.9.7
(venv) [root@localhost]# `pip install "apache-airflow[all]==2.3.0" -c "https://raw.githubusercontent.com/apache/airflow/constraints-2.3.0/constraints-3.9.txt"`
### Operating System
CentOS 7
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23692 | https://github.com/apache/airflow/pull/23697 | 4afa8e3cecf1e4a2863715d14a45160034ad31a6 | 310002e44887847991b0864bbf9a921c7b11e930 | 2022-05-13T00:01:55Z | python | 2022-05-13T11:33:17Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,689 | ["airflow/timetables/_cron.py", "airflow/timetables/interval.py", "airflow/timetables/trigger.py", "tests/timetables/test_interval_timetable.py"] | Data Interval wrong when manually triggering with a specific logical date | ### Apache Airflow version
2.2.5
### What happened
When I use the date picker on the “Trigger DAG w/ config” page to choose a specific logical date for a scheduled daily DAG, for some reason the Data Interval Start (circled in red) is 2 days before the logical date (circled in blue), instead of the same as the logical date, and the Data Interval End is one day before the logical date. So the interval is the correct length, but on the wrong days.

I encountered this with a DAG with a daily schedule which typically runs at 09:30 UTC. I am testing this in a dev environment (with catchup off) and trying to trigger a run for 2022-05-09 09:30:00. I would expect the data interval to start at that same time and the data interval end to be 1 day after.
It has nothing to do with the previous run, since that was way back on 2022-04-26.
### What you think should happen instead
The data interval start date should be the same as the logical date (if it is a custom logical date)
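For a concrete example of the expectation, here is the arithmetic I have in mind (a sketch using pendulum, not Airflow internals):
```python
import pendulum

# Manual run triggered with this logical date on a daily schedule.
logical_date = pendulum.datetime(2022, 5, 9, 9, 30, tz="UTC")

expected_start = logical_date            # same as the logical date
expected_end = logical_date.add(days=1)  # one schedule interval later
print(expected_start, expected_end)
```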
### How to reproduce
I made a sample DAG as shown below:
```python
import pendulum

from airflow.models import DAG
from airflow.operators.python import PythonOperator

def sample(data_interval_start, data_interval_end):
    return "data_interval_start: {}, data_interval_end: {}".format(str(data_interval_start), str(data_interval_end))

args = {
    'start_date': pendulum.datetime(2022, 3, 10, 9, 30)
}

with DAG(
    dag_id='sample_data_interval_issue',
    default_args=args,
    schedule_interval='30 9 * * *'  # 09:30 UTC
) as sample_data_interval_issue:
    task = PythonOperator(
        task_id='sample',
        python_callable=sample
    )
```
I then enabled it, which started a scheduled DAG run (`2022-05-11, 09:30:00 UTC`), and the `data_interval_start` was the same as I expected: `2022-05-11T09:30:00+00:00`.
However, when I went to the "Trigger DAG w/ config" page, chose `2022-05-09 09:30:00+00:00` in the date chooser, and triggered that, it showed the run datetime as `2022-05-09, 09:30:00 UTC`, but the `data_interval_start` was incorrectly set to `2022-05-08T09:30:00+00:00`, 2 days before the date I chose.
### Operating System
Debian GNU/Linux 11 (bullseye)
### Versions of Apache Airflow Providers
N/A
### Deployment
Astronomer
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23689 | https://github.com/apache/airflow/pull/22658 | 026f1bb98cd05a26075bd4e4fb68f7c3860ce8db | d991d9800e883a2109b5523ae6354738e4ac5717 | 2022-05-12T22:29:26Z | python | 2022-08-16T13:26:00Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,688 | ["airflow/decorators/base.py", "tests/decorators/test_python.py"] | _TaskDecorator has no __wrapped__ attribute in v2.3.0 | ### Apache Airflow version
2.3.0 (latest released)
### What happened
I run a unit test on a task which is defined using the task decorator. In the unit test, I unwrap the task decorator with the `__wrapped__` attribute, but this no longer works in v2.3.0. It works in v2.2.5.
### What you think should happen instead
I expect the wrapped function to be returned. This was what occurred in v2.2.5
When running pytest on the airflow v2.3.0 the following error is thrown:
```AttributeError: '_TaskDecorator' object has no attribute '__wrapped__'```
### How to reproduce
Here's a rough outline of the code.
A module `hello.py` contains the task definition:
```
from airflow.decorators import task

@task
def hello_airflow():
    print('hello airflow')
```
and the test contains
```
from hello import hello_airflow

def test_hello_airflow():
    hello_airflow.__wrapped__()
```
Then run pytest
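As a possible workaround for the tests on 2.3.0 (an assumption based on the decorator object exposing the original callable as `.function`; verify against your version):
```python
from hello import hello_airflow

def test_hello_airflow():
    # `.function` appears to hold the undecorated callable on 2.3.0's _TaskDecorator.
    hello_airflow.function()
```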
### Operating System
Rocky Linux 8.5 (Green Obsidian)
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23688 | https://github.com/apache/airflow/pull/23830 | a8445657996f52b3ac5ce40a535d9c397c204d36 | a71e4b789006b8f36cd993731a9fb7d5792fccc2 | 2022-05-12T22:12:05Z | python | 2022-05-23T01:24:52Z |
closed | apache/airflow | https://github.com/apache/airflow | 23,679 | ["airflow/config_templates/config.yml.schema.json", "airflow/configuration.py", "tests/config_templates/deprecated.cfg", "tests/config_templates/deprecated_cmd.cfg", "tests/config_templates/deprecated_secret.cfg", "tests/config_templates/empty.cfg", "tests/core/test_configuration.py", "tests/utils/test_config.py"] | exceptions.DagRunNotFound: DagRun for example_bash_operator with run_id or execution_date of | ### Apache Airflow version
main (development)
### What happened
Trying to run the `airflow tasks run` command locally and force `StandardTaskRunner` to use `_start_by_exec` instead of `_start_by_fork`:
```
airflow tasks run example_bash_operator also_run_this scheduled__2022-05-08T00:00:00+00:00 --job-id 237 --local --subdir /Users/ping_zhang/airlab/repos/airflow/airflow/example_dags/example_bash_operator.py -f -i
```
However, it always errors out:
see https://user-images.githubusercontent.com/8662365/168164336-a75bfac8-cb59-43a9-b9f3-0c345c5da79f.png
```
[2022-05-12 12:08:32,893] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this Traceback (most recent call last):
[2022-05-12 12:08:32,893] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/miniforge3/envs/apache-***/bin/***", line 33, in <module>
[2022-05-12 12:08:32,893] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this sys.exit(load_entry_point('apache-***', 'console_scripts', '***')())
[2022-05-12 12:08:32,893] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/__main__.py", line 38, in main
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this args.func(args)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/cli/cli_parser.py", line 51, in command
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this return func(*args, **kwargs)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/utils/cli.py", line 99, in wrapper
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this return f(*args, **kwargs)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/cli/commands/task_command.py", line 369, in task_run
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this ti, _ = _get_ti(task, args.execution_date_or_run_id, args.map_index, pool=args.pool)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/utils/session.py", line 71, in wrapper
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this return func(*args, session=session, **kwargs)
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/cli/commands/task_command.py", line 152, in _get_ti
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this dag_run, dr_created = _get_dag_run(
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this File "/Users/ping_zhang/airlab/repos/***/***/cli/commands/task_command.py", line 112, in _get_dag_run
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this raise DagRunNotFound(
[2022-05-12 12:08:32,894] {base_task_runner.py:109} INFO - Job 265: Subtask also_run_this ***.exceptions.DagRunNotFound: DagRun for example_bash_operator with run_id or execution_date of 'scheduled__2022-05-08T00:00:00+00:00' not found
[2022-05-12 12:08:33,014] {local_task_job.py:163} INFO - Task exited with return code 1
[2022-05-12 12:08:33,048] {local_task_job.py:265} INFO - 0 downstream tasks scheduled from follow-on schedule check
[2022-05-12 12:11:30,742] {taskinstance.py:1120} INFO - Dependencies not met for <TaskInstance: example_bash_operator.also_run_this scheduled__2022-05-08T00:00:00+00:00 [running]>, dependency 'Task Instance Not Running' FAILED: Task is in the running state
[2022-05-12 12:11:30,743] {local_task_job.py:102} INFO - Task is not able to be run
```
I have checked that the dag_run does exist in my DB:

### What you think should happen instead
_No response_
### How to reproduce
Pull the latest main branch with this commit: `7277122ae62305de19ceef33607f09cf030a3cd4`.
Run the airflow scheduler, webserver, and worker locally with `CeleryExecutor`.
### Operating System
Apple M1 Max, version: 12.2
### Versions of Apache Airflow Providers
NA
### Deployment
Other
### Deployment details
on my local mac with latest main branch, latest commit: `7277122ae62305de19ceef33607f09cf030a3cd4`
### Anything else
Python version:
Python 3.9.7
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| https://github.com/apache/airflow/issues/23679 | https://github.com/apache/airflow/pull/23723 | ce8ea6691820140a0e2d9a5dad5254bc05a5a270 | 888bc2e233b1672a61433929e26b82210796fd71 | 2022-05-12T19:15:54Z | python | 2022-05-20T14:09:52Z |