Posted over 3 years ago by gsroot
When I run my Flask app on gunicorn with --worker-class=gevent, the server blocks.
Gunicorn command:
gunicorn app:app --workers=5 --worker-class=gevent --threads=5 --timeout=1800 --log-level=DEBUG
Source code:
query = '...'
query_job = bigquery_client.query(query)
query_job.to_dataframe()  # to_dataframe() is where the block occurs
Source code where the block occurs inside the BigQuery library (lines 678 to 680 of google/cloud/bigquery/_pandas_helpers.py):
try:
    frame = worker_queue.get(timeout=_PROGRESS_INTERVAL)  # this point
    yield frame
Python library versions:
python=3.7.10
gunicorn=20.1.0
gevent=21.1.2
eventlet=0.30.2
google-cloud-bigquery=2.20.0
google-cloud-bigquery-storage=2.4.0
google-cloud-core=1.6.0
pyarrow=4.0.0
The same happens when worker-class is eventlet. It does not occur when worker-class is gthread or sync.
The blocking call in _pandas_helpers.py runs inside the following construct; is it a thread-related problem?
with concurrent.futures.ThreadPoolExecutor(max_workers=total_streams) as pool:
Why does the block happen?
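For reference, the pattern the question points at boils down to a producer/consumer setup: worker threads in a pool push result frames onto a queue, and the main thread polls the queue with a timeout. The stdlib sketch below mirrors that shape (the _PROGRESS_INTERVAL value and download_stream function are assumptions, not the library's actual code); under gevent/eventlet monkey patching, the interaction between the pool's workers and the patched queue/locks is exactly the kind of place such a hang can occur.

```python
import concurrent.futures
import queue

_PROGRESS_INTERVAL = 0.2  # assumed value; mirrors the name in _pandas_helpers.py

def download_stream(worker_queue, items):
    # Stand-in for a per-stream download worker: push results onto the queue.
    for item in items:
        worker_queue.put(item)

def frames(total_streams=2):
    worker_queue = queue.Queue()
    with concurrent.futures.ThreadPoolExecutor(max_workers=total_streams) as pool:
        futures = [
            pool.submit(download_stream, worker_queue, [f"frame-{i}"])
            for i in range(total_streams)
        ]
        while not all(f.done() for f in futures) or not worker_queue.empty():
            try:
                # The same timeout-based get that the question highlights.
                frame = worker_queue.get(timeout=_PROGRESS_INTERVAL)
                yield frame
            except queue.Empty:
                continue

print(sorted(frames()))
```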
Posted over 3 years ago by Mainul
I used flask-socketio to implement a websocket. I used the eventlet/socketio sleep() function, which is supposed to work asynchronously. This works in my custom event handler, but when it is under the connect event, it does not. Here is the reproducible code:
Backend code:
from flask import Flask
from flask_socketio import SocketIO, emit, send
from eventlet import sleep

app = Flask(__name__)
socketio = SocketIO(app)

@app.route('/socket', methods=['POST', 'GET'])
def socket():
    """Landing page."""
    return app.send_static_file('websocket_test.html')

@socketio.on('connect', namespace='/test')
def test_connect(auth):
    for i in range(10):
        emit('my_response', {'data': str(i)})
        # This does not work asynchronously
        socketio.sleep(1)

@socketio.on('my_event', namespace='/test')
def test_message(message):
    emit('my_response', {'data': message['dataaa']})
    for i in range(10):
        # emit('my_response', {'data': str(i)})
        emit('my_response', {'data': message['dataaa']})
        # But this works asynchronously
        socketio.sleep(1)

if __name__ == '__main__':
    socketio.run(app)
websocket_test.html (page markup not preserved; only the visible text "Socket-Test", "Socket", "Logs" remains)
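For intuition about the cooperative sleep this relies on, here is a stdlib asyncio analogue (not Flask-SocketIO itself, just the general scheduling model): two tasks interleave only at the points where they yield to the scheduler, which is what socketio.sleep() is meant to do inside a handler.

```python
import asyncio

async def handler(name, log, steps=3):
    for i in range(steps):
        log.append(f"{name}:{i}")
        # Cooperative yield point, analogous to socketio.sleep() in a handler.
        await asyncio.sleep(0)

async def main():
    log = []
    # Two "handlers" running concurrently; they interleave only at the awaits.
    await asyncio.gather(handler("a", log), handler("b", log))
    return log

log = asyncio.run(main())
print(log)
```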
Posted over 3 years ago by jmunsch
This is just a long way of asking: "How does sync_to_async work with blocking IO and gevent/psycogreen?"
For example:
from myapp.models import SomeModel
from asgiref.sync import sync_to_async
from gevent.threadpool import ThreadPoolExecutor as GThreadPoolExecutor

conf = {
    "thread_sensitive": False,
    "executor": GThreadPoolExecutor(max_workers=1),
}
await sync_to_async(SomeModel.objects.all, **conf)()
The third kwarg that can be passed to asgiref's sync_to_async is an executor; the executor is a type of concurrent.futures.ThreadPoolExecutor. According to the documentation, gevent.threadpool.ThreadPoolExecutor more or less inherits from and wraps concurrent.futures.ThreadPoolExecutor.
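For context, the core mechanism sync_to_async builds on can be sketched with the stdlib alone: submit the blocking callable to a concurrent.futures.ThreadPoolExecutor from an event loop and await the wrapped future. In this sketch, fetch_all is a hypothetical stand-in for a blocking ORM call.

```python
import asyncio
import concurrent.futures

def fetch_all():
    # Hypothetical stand-in for a blocking ORM call such as SomeModel.objects.all().
    return ["row-1", "row-2"]

async def main():
    loop = asyncio.get_running_loop()
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        # Roughly what sync_to_async does with its executor kwarg: run the
        # sync callable on the pool and await the result from the event loop.
        rows = await loop.run_in_executor(pool, fetch_all)
    return rows

print(asyncio.run(main()))
```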
Say, for example, I want to use a werkzeug DispatcherMiddleware and wrap an ASGI app. Think FastAPI mounted inside an older monolithic Django WSGI app (using eventlet / gevent / psycogreen / monkey patching). Here's my attempt at doing it:
https://github.com/rednaks/django-async-orm/pull/7/files
Basically, how to get django async-ish ORM?
import concurrent.futures

try:
    from gevent.threadpool import ThreadPoolExecutor as GThreadPoolExecutor
    from django.conf import settings
    if settings.GEVENT_DJANGO_ASYNC_ORM:
        from gevent import monkey
        monkey.patch_all()

    def monkey_patch_the_monkey_patchers(ex):
        from .patch_gevent import _FutureProxy

        def submit(ex, fn, *args, **kwargs):  # pylint:disable=arguments-differ
            print(fn, *args, **kwargs)
            with ex._shutdown_lock:  # pylint:disable=not-context-manager
                if ex._shutdown:
                    raise RuntimeError('cannot schedule new futures after shutdown')
                future = ex._threadpool.spawn(fn, *args, **kwargs)
                proxy_future = _FutureProxy(future)
                proxy_future.__class__ = concurrent.futures.Future
                return proxy_future
        ex.submit = submit
        return ex

    MonkeyPoolExecutor = monkey_patch_the_monkey_patchers(GThreadPoolExecutor)
    conf = {"thread_sensitive": False, "executor": MonkeyPoolExecutor(max_workers=1)}
    executor_ = MonkeyPoolExecutor
except Exception as e:
    print(e)
    print('defaulting django_async_orm')
https://github.com/rednaks/django-async-orm/discussions/9
https://github.com/abersheeran/a2wsgi
related:
django 3.0 async orm
with concurrent.futures.ThreadPoolExecutor() as executor: ... does not wait
Posted over 3 years ago by Mark7888
I had working code running Flask with gunicorn (eventlet worker) in Docker. It's also working in production, but on my machine it started doing this. I can't find anything on Google about it. What might be the problem?
Error: class uri 'eventlet' invalid or not found:
web_1 | Traceback (most recent call last):
web_1 |   File "/root/.local/lib/python3.7/site-packages/gunicorn/util.py", line 99, in load_class
web_1 |     mod = importlib.import_module('.'.join(components))
web_1 |   File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
web_1 |     return _bootstrap._gcd_import(name[level:], package, level)
web_1 |   File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
web_1 |   File "<frozen importlib._bootstrap>", line 983, in _find_and_load
web_1 |   File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
web_1 |   File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
web_1 |   File "<frozen importlib._bootstrap_external>", line 728, in exec_module
web_1 |   File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
web_1 |   File "/root/.local/lib/python3.7/site-packages/gunicorn/workers/geventlet.py", line 20, in <module>
web_1 |     from eventlet.wsgi import ALREADY_HANDLED as EVENTLET_ALREADY_HANDLED
web_1 | ImportError: cannot import name 'ALREADY_HANDLED' from 'eventlet.wsgi' (/root/.local/lib/python3.7/site-packages/eventlet/wsgi.py)
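For what it's worth, this ImportError typically means the installed eventlet no longer exports ALREADY_HANDLED, which gunicorn 20.1.0's geventlet worker still imports; eventlet removed that name around release 0.30.3. A hedged workaround (the version boundary is an assumption to verify against both projects' changelogs) is to pin eventlet below that release in requirements.txt:

```
eventlet<0.30.3
```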
Posted over 3 years ago by Marcos Álvarez
I am developing my final degree project and I am facing some problems with Python, Flask, SocketIO, and background threads.
My solution takes some files as input, processes them, makes some calculations, and generates an image and a CSV file. Those files are then uploaded to a storage service. I want to process the files on a background thread and notify my clients (web, Android, and iOS) using websockets. Right now, I am using flask-socketio with eventlet as the async_mode of my socket. When a client uploads the files, the process is started in a background thread (using socketio.start_background_task), but that heavy process (it takes about 30 minutes to finish) seems to take control of the main thread; as a result, when I try to make an HTTP request to the server, the response loads infinitely.
I would like to know if there is a way to make this work using eventlet or maybe using another different approach.
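One direction worth considering (a sketch only, not Flask-SocketIO-specific advice): eventlet greenlets only yield on I/O, so CPU-heavy work can monopolize the cooperative loop; moving the computation into a separate process keeps the serving process responsive. A minimal stdlib sketch with concurrent.futures.ProcessPoolExecutor, where process_files is a hypothetical stand-in for the 30-minute job:

```python
import concurrent.futures

def process_files(paths):
    # Hypothetical stand-in for the heavy processing step; it runs in a
    # child process, so it cannot starve the parent's event loop.
    return {path: f"processed-{path}" for path in paths}

def main():
    with concurrent.futures.ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(process_files, ["input.csv"])
        # The parent stays free to serve requests; here we simply wait.
        return future.result()

if __name__ == "__main__":
    print(main())
```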
Thank you in advance.
Posted over 3 years ago by Shivanand T
I am running a Django application in the backend and have gunicorn + nginx in front of it.
In /etc/systemd/system/gunicorn.service:
[Service]
User=user
Group=www-data
WorkingDirectory=/home/user/dev/backend/RecruitRest/recruit
ExecStart=/home/user/miniconda3/envs/hire10x/bin/gunicorn --access-logfile /home/shivanand3939/dev/gunicorn.access.log --error-logfile /home/shivanand3939/dev/gunicorn.error.log --timeout 1000 -k eventlet --workers 1 --bind unix:/home/shivanand3939/dev/backend/RecruitRest/recruit/recruit.sock recruit.wsgi:application
This throws a CORS error when the worker-class is eventlet, but if I remove that part, i.e. "-k eventlet", from the command above, my application runs fine.
The location part of my nginx file looks like this:
location / {
    #include proxy_params;
    # proxy_pass http://unix:/home/shivanand3939/prod/RecruitRest/recruit/gunicorn.sock;
    # proxy_pass http://127.0.0.1:8000/;
    add_header Cache-Control private;
    add_header Cache-Control no-cache;
    add_header Cache-Control no-store;
    add_header Cache-Control must-revalidate;
    add_header Pragma no-cache;
    client_max_body_size 100M;
    proxy_buffering off;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    ....
}
I have applied some of the rules, like turning off proxy_buffering, as mentioned in the document http://docs.gunicorn.org/_/downloads/en/19.x/pdf/ (page 35).
I need to run gunicorn in async mode (as I have some blocking requests) and not in sync mode. What change do I need to make?
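As a point of comparison (a sketch only; the socket path matches the unix socket in the ExecStart line above and should be adjusted to the deployment), the gunicorn deployment docs pair async workers with an nginx config that actually proxies to the socket. Note that in the location block above every proxy_pass is commented out, so requests never reach gunicorn at all:

```
upstream app_server {
    server unix:/home/shivanand3939/dev/backend/RecruitRest/recruit/recruit.sock fail_timeout=0;
}

server {
    location / {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_buffering off;
        proxy_pass http://app_server;
    }
}
```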
Posted almost 4 years ago by mdargacz
I'm facing an issue related to using an active I/O connection in a SIGTERM handler under the gunicorn eventlet worker.
server.py:
import signal

def exit_with_grace(*args):
    conn = get_redis_connection()  # helper defined elsewhere in the project
    conn.set('exited_gracefully', True)

signal.signal(signal.SIGTERM, exit_with_grace)
I also tried firing off a Celery task (using the amqp broker), but all my ideas failed. When I start the server in debug mode using python server.py, it works perfectly, so I suspect gunicorn is killing the process before the handler's code gets to run. Gunicorn + eventlet does not allow connecting to Redis in the SIGTERM handler, failing with the following error:
Traceback (most recent call last):
  File "/project/handlers/socketio/redis_context_backend.py", line 256, in publish_pattern
    return conn.publish(pattern, serialized)
  File "/project/venv/lib/python3.6/site-packages/redis/client.py", line 3098, in publish
    return self.execute_command('PUBLISH', channel, message)
  File "/project/venv/lib/python3.6/site-packages/redis/client.py", line 898, in execute_command
    conn = self.connection or pool.get_connection(command_name, **options)
  File "/project/venv/lib/python3.6/site-packages/redis/connection.py", line 1192, in get_connection
    connection.connect()
  File "/project/venv/lib/python3.6/site-packages/redis/connection.py", line 559, in connect
    sock = self._connect()
  File "/project/venv/lib/python3.6/site-packages/redis/connection.py", line 603, in _connect
    sock.connect(socket_address)
  File "/project/venv/lib/python3.6/site-packages/eventlet/greenio/base.py", line 250, in connect
    self._trampoline(fd, write=True)
  File "/project/venv/lib/python3.6/site-packages/eventlet/greenio/base.py", line 210, in _trampoline
    mark_as_closed=self._mark_as_closed)
  File "/project/venv/lib/python3.6/site-packages/eventlet/hubs/__init__.py", line 142, in trampoline
    assert hub.greenlet is not current, 'do not call blocking functions from the mainloop'
Gunicorn command:
gunicorn --worker-class eventlet -w 1 server:ws --reload -b localhost:5001
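The assertion at the bottom of the traceback ("do not call blocking functions from the mainloop") is eventlet refusing to perform blocking I/O from the hub's own greenlet, which is the context signal handlers can end up running in. One common pattern (a stdlib sketch under that assumption; the Redis write is replaced by a list append) is to have the handler only record the shutdown request and perform the actual I/O in normal application flow:

```python
import signal

shutdown_requested = False
cleanup_log = []  # stand-in for the Redis write performed at shutdown

def exit_with_grace(signum, frame):
    # Do no I/O here: just record that shutdown was requested.
    global shutdown_requested
    shutdown_requested = True

signal.signal(signal.SIGTERM, exit_with_grace)

def main_loop():
    # Regular application code checks the flag and performs the blocking
    # cleanup (e.g. conn.set('exited_gracefully', True)) outside the handler.
    if shutdown_requested:
        cleanup_log.append(("exited_gracefully", True))

signal.raise_signal(signal.SIGTERM)  # simulate the signal for this sketch
main_loop()
print(cleanup_log)
```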