Error 404 #7
Which Docker image are you using? I am currently using this one, https://hub.docker.com/r/savatar101/marker-api (0.3), and it works just fine. The last update there was four months ago, so there have been no recent changes. Can you please send me your Obsidian configuration? When you built your own image as described here, https://github.com/adithya-s-k/marker-api, which setup did you use? The simple server setup?
So the Docker image you sent has some notable differences; for instance, the default port in the GitHub build is now 8080. Also, I couldn't find a way to start the server by calling "marker-api" on the CLI in this build. I'll try your image to see if it works. I did indeed use the simple server setup. Can you give me some more specifics about the config info you need? Is it a particular file, or do you want the entire folder?
I just tried your Docker image, and I get the same issue...
Okay, this is weird; it should be working if you use the same image as I do. I also had a deeper look now, and there is indeed a new version and a whole new setup for the API part. However, the endpoint shouldn't have changed.
Let me know how the new setup works for you. I'm also having some permission issues running scripts with my new computer. I'll get these fixed just in case and try again. |
I'm sorry, but I can't get the new version of the marker API to run on my device. There is also this issue: adithya-s-k/marker-api#20, and I have exactly the same problem. I tried everything recommended to make this work, but I couldn't build the Docker image or run the Python server manually. The image mentioned earlier works fine for me, though, so it should also work for you. Maybe try another port if you have something else running on 8080, and you could also have a look in the developer console of Obsidian to see if there is a hint about what is happening. I'll try setting it up again once the error gets fixed.
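A minimal sketch of the port-remapping suggestion above, assuming the container listens on 8080 internally (the host-side port 8081 and the image name are illustrative, not confirmed for every build):

```shell
# Map an unused host port (here 8081, chosen arbitrarily) to the
# container's internal 8080, in case something else already owns 8080.
docker run --gpus all -p 8081:8080 marker-api-gpu

# Then point the Obsidian plugin's endpoint setting at
# http://localhost:8081 instead of http://localhost:8080.
```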
Fixed in V1.3.0! |
I recently got a new computer and had to reinstall the marker API. I noticed some differences in the new build, but I was able to set up the local Docker version with GPU support. However, when I use the plugin in Obsidian, I keep getting errors. I'm not sure whether there were changes to the build that make the plugin deprecated, or whether it's something else. Of note, I was also able to run the local Python server, but the issue persists. Please refer to the logs below.
docker run --gpus all -p 8080:8080 marker-api-gpu
==========
== CUDA ==
==========

CUDA Version 11.8.0
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
[ASCII-art startup banner: "Marker API"]
Easily deployable and highly Scalable 🚀 API to convert PDF to markdown quickly with high accuracy.
Abstracted by Adithya S K : https://twitter.com/adithya_s_k
Loaded detection model vikp/surya_det3 on device cuda with dtype torch.float16
Loaded detection model vikp/surya_layout3 on device cuda with dtype torch.float16
Loaded reading order model vikp/surya_order on device cuda with dtype torch.float16
Loaded recognition model vikp/surya_rec2 on device cuda with dtype torch.float16
Loaded texify model to cuda with torch.float16 dtype
INFO: 172.17.0.1:51880 - "POST /convert HTTP/1.1" 404 Not Found
INFO: 172.17.0.1:56182 - "POST /convert HTTP/1.1" 404 Not Found
INFO: 172.17.0.1:41442 - "POST /convert HTTP/1.1" 404 Not Found
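The 404s above suggest the route moved rather than the server crashing, since the model-loading lines show the app started cleanly. Because the Uvicorn startup messages indicate a FastAPI app, one way to see which routes the running container actually serves is to fetch its OpenAPI schema, which FastAPI exposes at /openapi.json by default. A small sketch (the base URL and port are assumptions that must match your docker port mapping):

```python
import json
import urllib.request


def paths_from_spec(spec: dict) -> list[str]:
    """Return the sorted list of route paths declared in an OpenAPI spec."""
    return sorted(spec.get("paths", {}))


def list_routes(base_url: str) -> list[str]:
    """Fetch a FastAPI app's OpenAPI schema and list its mounted paths.

    FastAPI serves the schema at /openapi.json by default; base_url must
    match the host/port from your docker run port mapping.
    """
    with urllib.request.urlopen(f"{base_url}/openapi.json") as resp:
        return paths_from_spec(json.load(resp))


# Usage against the container from the logs above (requires it running):
#   list_routes("http://localhost:8080")
# Comparing the output against "/convert" shows whether the endpoint
# was renamed in the new build.
```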