humanities / gargantext · Commits · 4f29821a

Commit 4f29821a authored Apr 08, 2016 by delanoe

[FIX] urls and todo added.

parent 0c6a4ea4

Showing 1 changed file with 32 additions and 18 deletions

scrapers/urls.py  (+32, −18)
# ____ ____ ____ _ _ ____ _____ ____ _
#/ ___| / ___| _ \| || | | _ \___ /| _ \ | |
#\___ \| | | |_) | || |_| |_) ||_ \| |_) / __)
# ___) | |___| _ <|__ _| __/___) | _ <\__ \
#|____/ \____|_| \_\ |_| |_| |____/|_| \_( /
# |_|
#
# Scrapers == getting data from external databases
# Available databases :
## Pubmed
## IsTex,
## TODO CERN
from django.conf.urls import url

import scrapers.pubmed as pubmed
import scrapers.istex  as istex
# TODO
#import scrapers.cern      as cern
#import scrapers.hal       as hal
#import scrapers.revuesOrg as revuesOrg
# Scraping : getting data from external database
# Available databases : Pubmed, IsTex, (next: CERN)
# TODO ?
# REST API for the scrapers
# /!\ urls patterns here are *without* the trailing slash
urlpatterns = [ url(r'^pubmed/query$'      , pubmed.query )
              , url(r'^pubmed/save/(\d+)'  , pubmed.save  )
              , url(r'^istex/query$'       , istex.query  )
              , url(r'^istex/save/(\d+)'   , istex.save   )

              # TODO REST API for the scrapers
              #, url(r'^rest$'             , scraping.Target.as_view() )

              # TODO
              #, url(r'^cern/query$'       , cern.query )
              #, url(r'^cern/save/(\d+)'   , cern.save  )
              ]
#def count(keywords):
# return 42
#
#def query_save(keywords):
# return 'path/to/query.xml'
#
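For context, a minimal sketch of how an app-level URLconf like this one is typically mounted into a Django project's root urls.py via include(). The root module and the scrapers/ prefix are assumptions for illustration; the project's actual root URLconf is not part of this diff.

# Hypothetical project root urls.py (Django 1.x style, assumed wiring)
from django.conf.urls import include, url

urlpatterns = [
    # Delegates e.g. /scrapers/pubmed/query and /scrapers/istex/save/<id>
    # to the patterns defined in scrapers/urls.py above.
    url(r'^scrapers/', include('scrapers.urls')),
]

With such wiring, a request to /scrapers/pubmed/query would be routed to pubmed.query, and /scrapers/pubmed/save/123 would pass "123" as a positional argument to pubmed.save via the (\d+) capture group.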