Commit ea1172a4 authored by delanoe

Merge branch 'testing' into stable-merge

parents fd2cbb52 8a68083e
# Definitions and notation for the documentation (!= python notation)
## Node
The table (nodes) is a list of nodes: [Node]
Each Node has:
- a typename
- a parent_id
- a name
### Each Node has a parent_id
Node A
├── Node B
└── Node C
If Node A is the parent of Node B and Node C,
then NodeA.id == NodeB.parent_id == NodeC.parent_id.
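For illustration, a minimal plain-Python sketch of this parent relation (a hypothetical `Node` class, not the actual Gargantext model):
```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    id: int
    typename: str
    name: str
    parent_id: Optional[int] = None

node_a = Node(id=1, typename="project", name="Node A")
node_b = Node(id=2, typename="corpus", name="Node B", parent_id=node_a.id)
node_c = Node(id=3, typename="corpus", name="Node C", parent_id=node_a.id)

# Node A is the parent of Node B and Node C
assert node_a.id == node_b.parent_id == node_c.parent_id
```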
### Each Node has a typename
Notation: Node[foo](bar) is a Node of typename "foo" with name "bar".
Then:
- Node[project] is a project.
- Node[corpus] is a corpus.
- Node[document] is a document.
### Each Node has a typename and a parent
Node[user](name)
├── Node[project](myProject1)
│   ├── Node[corpus](myCorpus1)
│   ├── Node[corpus](myCorpus2)
│   └── Node[corpus](myCorpus3)
└── Node[project](myProject2)
/!\ There are 3 ways to manage the rights of a Node:
1) Node[user] is a folder containing all of the user's projects, corpora and
documents (i.e. Node[user] is the parent of those children).
2) Each node has a user_id (the way mainly used today); a minimal sketch of this
check is given below.
3) Rights management for groups (already implemented, but not used since it is
not connected to the frontend).
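A minimal sketch of way 2, assuming nodes are plain dicts; it only echoes the `user.owns(project)` style of check used in the views further down:
```python
# Hypothetical sketch: each node carries a user_id, so ownership is a comparison.
def owns(user_id, node):
    return node["user_id"] == user_id

project = {"id": 10, "typename": "project", "name": "myProject1", "user_id": 42}
assert owns(42, project)
assert not owns(7, project)
```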
## Global Parameters
Global User is Gargantua (Node with typename User).
This node is the parent of the other Nodes that store parameters.
Node[user](gargantua) (gargantua.id == Node[user].user_id)
├── Node[TFIDF-Global](global) : without group
│   ├── Node[tfidf](database1)
│   ├── Node[tfidf](database2)
│   └── Node[tfidf](database3)
└── Node[anotherMetric](global)
## NodeNgram
NodeNgram is a relation between a Node and an ngram:
- document and ngrams
- metrics and ngrams (the position of the metrics node indicates the context)
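A minimal sketch of the NodeNgram relation, using hypothetical ids; only the node/ngram pairing is taken from the description above:
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NodeNgram:
    node_id: int   # a document node or a metrics node
    ngram_id: int  # the ngram linked to that node

# The same ngram can be linked both to a document and to a metrics node;
# the typename of the node gives the link its meaning.
links = [
    NodeNgram(node_id=101, ngram_id=7),  # document 101 contains ngram 7
    NodeNgram(node_id=202, ngram_id=7),  # metrics node 202 refers to ngram 7
]
print(links)
```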
# Community Parameters
# User Parameters
@@ -8,6 +8,9 @@ Gargantext is a web plateform to explore your corpora using text-mining[...](abo
* [Take a tour](demo.md) of the different features offered by Gargantext
+## Architecture
+* [Architecture](architecture.md) Architecture of Gargantext
##Need some help?
Ask the community at:
* Create user gargantua
The main user of Gargantext is Gargantua (Pantagruel will get a role soon)!
``` bash
sudo adduser --disabled-password --gecos "" gargantua
```
* Create the directories you need
In this example the gargantext package will be installed in /srv/.
``` bash
for dir in "/srv/gargantext" \
           "/srv/gargantext_lib" \
           "/srv/gargantext_static" \
           "/srv/gargantext_media" \
           "/srv/env_3-5"; do
    sudo mkdir -p "$dir"
    sudo chown gargantua:gargantua "$dir"
done
```
You should see:
```bash
$ tree /srv
/srv
├── env_3-5
├── gargantext
├── gargantext_lib
├── gargantext_media
└── gargantext_static
```
* Get the main libraries
Download them, uncompress them and give the main user access to them.
Please be patient: given the size of the libraries (about 27 GB), this step can take a while.
``` bash
wget http://dl.gargantext.org/gargantext_lib.tar.bz2 \
    && tar xvjf gargantext_lib.tar.bz2 -C /srv/gargantext_lib \
    && sudo chown -R gargantua:gargantua /srv/gargantext_lib \
    && echo "Libs installed"
```
* Get the source code of Gargantext
by cloning the gargantext repository
``` bash
git clone ssh://gitolite@delanoe.org:1979/gargantext /srv/gargantext \
    && cd /srv/gargantext \
    && git fetch origin refactoring \
    && git checkout refactoring
```
TODO(soon): git clone https://gogs.iscpif.fr/gargantext.git
See the [next steps of installation procedure](install.md#Install)
tools/manual_install.md
@@ -240,7 +240,7 @@ RESOURCETYPES = [
    'crawler': None,
    },
    { "type": 9,
-     "name": 'SCOAP [CRAWLER/XML]',
+     "name": 'SCOAP [API/XML]',
      "parser": "CernParser",
      "format": 'MARC21',
      'file_formats':["zip","xml"],
@@ -255,7 +255,7 @@ RESOURCETYPES = [
    # },
    #
    { "type": 10,
-     "name": 'REPEC [CRAWLER]',
+     "name": 'REPEC [MULTIVAC API]',
      "parser": "MultivacParser",
      "format": 'JSON',
      'file_formats':["zip","json"],
@@ -263,13 +263,21 @@ RESOURCETYPES = [
    },
    { "type": 11,
-     "name": 'HAL [CRAWLER]',
+     "name": 'HAL [API]',
      "parser": "HalParser",
      "format": 'JSON',
      'file_formats':["zip","json"],
      "crawler": "HalCrawler",
    },
+   { "type": 12,
+     "name": 'ISIDORE [SPARQLE API /!\ BETA]',
+     "parser": "IsidoreParser",
+     "format": 'JSON',
+     'file_formats':["zip","json"],
+     "crawler": "IsidoreCrawler",
+   },
]
#shortcut for resources declaration in template
PARSERS = [(n["type"],n["name"]) for n in RESOURCETYPES if n["parser"] is not None]
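For illustration, a reduced sketch of how the PARSERS shortcut picks up the new entry; the list below is trimmed and the third, parser-less entry is hypothetical:
```python
RESOURCETYPES = [
    {"type": 11, "name": 'HAL [API]', "parser": "HalParser"},
    {"type": 12, "name": 'ISIDORE [SPARQLE API /!\\ BETA]', "parser": "IsidoreParser"},
    {"type": 99, "name": 'DISABLED SOURCE', "parser": None},  # hypothetical, filtered out
]
PARSERS = [(n["type"], n["name"]) for n in RESOURCETYPES if n["parser"] is not None]
print(PARSERS)  # [(11, 'HAL [API]'), (12, 'ISIDORE [SPARQLE API /!\\ BETA]')]
```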
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# ****************************
# **** ISIDORE Scraper ***
# ****************************
# CNRS COPYRIGHTS
# SEE LEGAL LICENCE OF GARGANTEXT.ORG
from ._Crawler import *
import json
from gargantext.constants import UPLOAD_DIRECTORY, QUERY_SIZE_N_MAX
from math import trunc
from gargantext.util.files import save
from gargantext.util.crawlers.sparql.bool2sparql import bool2sparql, isidore
class IsidoreCrawler(Crawler):
    '''ISIDORE SPARQL API CLIENT'''

    def __init__(self):
        # Main EndPoints
        self.BASE_URL = "https://www.rechercheisidore.fr"
        self.API_URL  = "sparql"
        # Final EndPoints
        # TODO : Change endpoint according to the type of database
        self.URL = self.BASE_URL + "/" + self.API_URL
        self.status = []

    def __format_query__(self, query=None, count=False, offset=None, limit=None):
        '''Formatting the query'''
        return bool2sparql(query, count=count, offset=offset, limit=limit)

    def _get(self, query, offset=0, limit=None, lang=None):
        '''Parameters to download data'''
        return isidore(query, count=False, offset=offset, limit=limit)

    def scan_results(self, query):
        '''
        scan_results : Returns the number of results
        Query String -> Int
        '''
        self.results_nb = [n for n in isidore(query, count=True)][0]
        return self.results_nb

    def download(self, query):
        downloaded = False
        self.status.append("fetching results")
        corpus = []
        limit = 1000
        self.query_max = self.scan_results(query)
        print("self.query_max : %s" % self.query_max)

        if self.query_max > QUERY_SIZE_N_MAX:
            msg = "Invalid sample size N = %i (max = %i)" % ( self.query_max
                                                            , QUERY_SIZE_N_MAX
                                                            )
            print("WARNING (scrap: ISIDORE d/l ): ", msg)
            self.query_max = QUERY_SIZE_N_MAX

        for offset in range(0, self.query_max, limit):
            print("Downloading result %s to %s" % (offset, self.query_max))
            for doc in isidore(query, offset=offset, limit=limit):
                corpus.append(doc)

        self.path = save( json.dumps(corpus).encode("utf-8")
                        , name='ISIDORE.json'
                        , basedir=UPLOAD_DIRECTORY
                        )
        downloaded = True
        return downloaded
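A minimal usage sketch of the crawler; the import path is an assumption (in Gargantext the class is normally obtained through load_crawler() and the RESOURCETYPES entry), and it requires a provisioned environment with the compiled bool2sparql binary:
```python
from gargantext.util.crawlers.ISIDORE import IsidoreCrawler  # module path assumed

crawler = IsidoreCrawler()
print("results:", crawler.scan_results("humanités numériques"))  # example query
if crawler.download("humanités numériques"):
    # download() saved the results as ISIDORE.json under UPLOAD_DIRECTORY
    print("saved to", crawler.path)
```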
import subprocess
import re
from .sparql import Service
#from sparql import Service
def bool2sparql(rawQuery, count=False, offset=None, limit=None):
    """
    bool2sparql :: String -> Bool -> Int -> String
    Translate a boolean query into a Sparql request
    You need to build bool2sparql binaries before
    See: https://github.com/delanoe/bool2sparql
    """
    query = re.sub("\"", "\'", rawQuery)
    bashCommand = ["/srv/gargantext/gargantext/util/crawlers/sparql/bool2sparql-exe", "-q", query]

    if count is True:
        bashCommand.append("-c")
    else:
        if offset is not None:
            for command in ["--offset", str(offset)]:
                bashCommand.append(command)
        if limit is not None:
            for command in ["--limit", str(limit)]:
                bashCommand.append(command)

    process = subprocess.Popen(bashCommand, stdout=subprocess.PIPE)
    output, error = process.communicate()

    if error is not None:
        raise error
    else:
        print(output)
        return output.decode("utf-8")
def isidore(query, count=False, offset=None, limit=None):
    """
    isidore :: String -> Bool -> Int -> Either (Dict String) Int
    use sparql-client either to search or to scan
    """
    query = bool2sparql(query, count=count, offset=offset, limit=limit)
    go = Service("https://www.rechercheisidore.fr/sparql/", "utf-8", "GET")
    results = go.query(query)

    if count is False:
        for r in results:
            doc = dict()
            doc_values = dict()
            doc["url"], doc["title"], doc["date"], doc["abstract"], doc["source"] = r
            for k in doc.keys():
                doc_values[k] = doc[k].value
            yield doc_values
    else:
        count = []
        for r in results:
            n, = r
            count.append(int(n.value))
        yield count[0]
def test():
    query = "delanoe"
    limit = 100
    offset = 10
    for d in isidore(query, offset=offset, limit=limit):
        print(d["date"])
    #print([n for n in isidore(query, count=True)])

if __name__ == '__main__':
    test()
@@ -3,7 +3,7 @@
# ****************************
# **** HAL Parser ***
# ****************************
-# CNRS COPYRIGHTS
+# CNRS COPYRIGHTS 2017
# SEE LEGAL LICENCE OF GARGANTEXT.ORG
from ._Parser import Parser
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# ****************************
# **** ISIDORE Parser ***
# ****************************
# CNRS COPYRIGHTS
# SEE LEGAL LICENCE OF GARGANTEXT.ORG
from ._Parser import Parser
from datetime import datetime
import json
class IsidoreParser(Parser):

    def parse(self, filebuf):
        '''
        parse :: FileBuff -> [Hyperdata]
        '''
        contents = filebuf.read().decode("UTF-8")
        data = json.loads(contents)
        filebuf.close()

        json_docs = data
        hyperdata_list = []
        hyperdata_path = { "title"    : "title"
                         , "abstract" : "abstract"
                         , "authors"  : "authors"
                         , "url"      : "url"
                         , "source"   : "source"
                         }

        uniq_id = set()

        for doc in json_docs:
            hyperdata = {}

            for key, path in hyperdata_path.items():
                hyperdata[key] = doc.get(path, "")

            if hyperdata["url"] not in uniq_id:
                # Removing the duplicates implicitly
                uniq_id.add(hyperdata["url"])

                # Source is the Journal Name
                hyperdata["source"] = doc.get("source", "ISIDORE Database")

                # Working on the date
                maybeDate = doc.get("date", None)
                if maybeDate is None:
                    date = datetime.now()
                else:
                    try:
                        # Model of date: 1958-01-01T00:00:00
                        date = datetime.strptime(maybeDate, '%Y-%m-%dT%H:%M:%S')
                    except:
                        print("FIX DATE ISIDORE please >%s<" % maybeDate)
                        date = datetime.now()

                hyperdata["publication_date"]  = date
                hyperdata["publication_year"]  = str(date.year)
                hyperdata["publication_month"] = str(date.month)
                hyperdata["publication_day"]   = str(date.day)

                hyperdata_list.append(hyperdata)

        return hyperdata_list
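A minimal sketch of feeding the parser an in-memory JSON buffer; the import path and the no-argument constructor are assumptions, and the field values are invented for illustration:
```python
import io
import json

from gargantext.util.parsers.ISIDORE import IsidoreParser  # module path assumed

docs = [{ "title"    : "Gargantua"
        , "abstract" : "A sample abstract"
        , "authors"  : "Rabelais"
        , "url"      : "http://example.org/doc/1"
        , "source"   : "Example Journal"
        , "date"     : "1532-01-01T00:00:00"
        }]
buf = io.BytesIO(json.dumps(docs).encode("utf-8"))

hyperdata = IsidoreParser().parse(buf)   # assumes the Parser base needs no arguments
print(hyperdata[0]["publication_year"])  # "1532"
```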
@@ -175,7 +175,6 @@ def parse(corpus):
                hyperdata = hyperdata,
        )
        session.add(document)
-       session.commit()
        documents_count += 1

        if pending_add_error_stats:
@@ -190,6 +189,9 @@ def parse(corpus):
    session.add(corpus)
    session.commit()

+   # Commit any pending document
+   session.commit()

    # update info about the resource
    resource['extracted'] = True
    #print( "resource n°",i, ":", d, "docs inside this file")
#!/bin/bash
### Update and install base dependencies
echo "############ DEBIAN LIBS ###############"
apt-get update && \
@@ -32,26 +34,26 @@ update-locale LC_ALL=fr_FR.UTF-8
    libxml2-dev xml-core libgfortran-6-dev \
    libpq-dev \
    python3.5 \
-   python3-dev \
+   python3.5-dev \
    python3-six python3-numpy python3-setuptools \
    python3-numexpr \
    python3-pip \
-   libxml2-dev libxslt-dev zlib1g-dev
+   libxml2-dev libxslt-dev zlib1g-dev libigraph0-dev
    #libxslt1-dev

-UPDATE AND CLEAN
+# UPDATE AND CLEAN
apt-get update && apt-get autoclean
#NB: removing /var/lib will avoid to significantly fill up your /var/ folder on your native system

########################################################################
### PYTHON ENVIRONNEMENT (as ROOT)
########################################################################

#adduser --disabled-password --gecos "" gargantua
cd /srv/
pip3 install virtualenv
-virtualenv /srv/env_3-5
+virtualenv /srv/env_3-5 -p /usr/bin/python3.5
echo 'alias venv="source /srv/env_3-5/bin/activate"' >> ~/.bashrc

# CONFIG FILES
@@ -60,9 +62,9 @@ update-locale LC_ALL=fr_FR.UTF-8
source /srv/env_3-5/bin/activate && pip3 install -r /srv/gargantext/install/gargamelle/requirements.txt && \
    pip3 install git+https://github.com/zzzeek/sqlalchemy.git@rel_1_1 && \
    python3 -m nltk.downloader averaged_perceptron_tagger -d /usr/local/share/nltk_data

chown gargantua:gargantua -R /srv/env_3-5

#######################################################################
## POSTGRESQL DATA (as ROOT)
#######################################################################
@@ -14,7 +14,7 @@ echo "::::: DJANGO :::::"
-/bin/su gargantua -c 'source /env_3-5/bin/activate &&\
+su gargantua -c 'source /srv/env_3-5/bin/activate &&\
    echo "Activated env" &&\
    /srv/gargantext/manage.py makemigrations &&\
    /srv/gargantext/manage.py migrate && \
@@ -24,4 +24,4 @@ echo "::::: DJANGO :::::"
    /srv/gargantext/dbmigrate.py && \
    /srv/gargantext/manage.py createsuperuser'
-/usr/sbin/service postgresql stop
+service postgresql stop
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# http://wiki.nginx.org/Pitfalls
# http://wiki.nginx.org/QuickStart
# http://wiki.nginx.org/Configuration
#
# Generally, you will want to move this file somewhere, and start with a clean
# file but keep this around for reference. Or just disable in sites-enabled.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##
# the upstream component nginx needs to connect to
upstream gargantext {
server unix:///tmp/gargantext.sock; # for a file socket
#server 127.0.0.1:8001; # for a web port socket (we'll use this first)
}
# Default server configuration
#
server {
listen 80 default_server;
listen [::]:80 default_server;
# SSL configuration
#
# listen 443 ssl default_server;
# listen [::]:443 ssl default_server;
#
# Note: You should disable gzip for SSL traffic.
# See: https://bugs.debian.org/773332
#
# Read up on ssl_ciphers to ensure a secure configuration.
# See: https://bugs.debian.org/765782
#
# Self signed certs generated by the ssl-cert package
# Don't use them in a production server!
#
# include snippets/snakeoil.conf;
client_max_body_size 800M;
client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
send_timeout 10;
root /var/www/html;
# Add index.php to the list if you are using PHP
#index index.html index.htm index.nginx-debian.html;
server_name _ stable.gargantext.org gargantext.org ;
# Django media
location /media {
alias /var/www/gargantext/media; # your Django project's media files - amend as required
}
location /static {
alias /srv/gargantext_static; # your Django project's static files - amend as required
}
# Finally, send all non-media requests to the Django server.
location / {
uwsgi_pass gargantext;
include uwsgi_params;
}
#access_log off;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
}
server {
listen 80 ;
listen [::]:80;
server_name dl.gargantext.org ;
error_page 404 /index.html;
location / {
root /var/www/dl ;
proxy_set_header Host $host;
proxy_buffering off;
}
access_log /var/log/nginx/dl.gargantext.org-access.log;
error_log /var/log/nginx/dl.gargantext.org-error.log;
}
# try bottleneck
+eventlet==0.20.1
amqp==1.4.9
anyjson==0.3.3
billiard==3.3.0.23
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# ****************************
# ***** ISIDORE Crawler *****
# ****************************
RESOURCE_TYPE_ISIDORE = 12
from django.shortcuts import redirect, render
from django.http import Http404, HttpResponseRedirect, HttpResponseForbidden
from gargantext.constants import get_resource, load_crawler, QUERY_SIZE_N_MAX
from gargantext.models.nodes import Node
from gargantext.util.db import session
from gargantext.util.db_cache import cache
from gargantext.util.http import JsonHttpResponse
from gargantext.util.scheduling import scheduled
from gargantext.util.toolchain import parse_extract_indexhyperdata
from traceback import print_tb   # used in save() when the workflow fails
def query(request):
    '''get GlobalResults()'''
    if request.method == "POST":
        query  = request.POST["query"]
        source = get_resource(RESOURCE_TYPE_ISIDORE)

        if source["crawler"] is not None:
            crawlerbot = load_crawler(source)()
            #old raw way to get results_nb
            results = crawlerbot.scan_results(query)
            #ids = crawlerbot.get_ids(query)
            return JsonHttpResponse({"results_nb": crawlerbot.results_nb})
def save(request, project_id):
    '''save'''
    if request.method == "POST":

        query = request.POST.get("query")
        try:
            N = int(request.POST.get("N"))
        except:
            N = 0
        print(query, N)

        #for next time
        #ids = request.POST["ids"]
        source = get_resource(RESOURCE_TYPE_ISIDORE)
        if N == 0:
            raise Http404()
        if N > QUERY_SIZE_N_MAX:
            N = QUERY_SIZE_N_MAX

        try:
            project_id = int(project_id)
        except ValueError:
            raise Http404()

        # do we have a valid project?
        project = session.query(Node).filter(Node.id == project_id).first()
        if project is None:
            raise Http404()
        user = cache.User[request.user.id]
        if not user.owns(project):
            return HttpResponseForbidden()

        # corpus node instanciation as a Django model
        corpus = Node(
            name      = query,
            user_id   = request.user.id,
            parent_id = project_id,
            typename  = 'CORPUS',
            hyperdata = { "action"      : "Scrapping data"
                        , "language_id" : "fr"
                        }
        )

        #download_file
        crawler_bot = load_crawler(source)()
        #for now no way to force downloading X records
        #the long running command
        filename = crawler_bot.download(query)
        corpus.add_resource(
              type = source["type"]
            #, name = source["name"]
            , path = crawler_bot.path
        )

        session.add(corpus)
        session.commit()
        #corpus_id = corpus.id

        try:
            scheduled(parse_extract_indexhyperdata)(corpus.id)
        except Exception as error:
            print('WORKFLOW ERROR')
            print(error)
            try:
                print_tb(error.__traceback__)
            except:
                pass
            # IMPORTANT ---------------------------------
            # sanitize session after interrupted transact
            session.rollback()
            # --------------------------------------------

        return render(
            template_name = 'pages/projects/wait.html',
            request = request,
            context = {
                'user'   : request.user,
                'project': project,
            },
        )

    data = [query_string, query, N]
    print(data)
    return JsonHttpResponse(data)
@@ -10,19 +10,15 @@
# moissonneurs == getting data from external databases

-# Available databases :
-## Pubmed
-## IsTex,
-## CERN

from django.conf.urls import url

+# Available databases :
import moissonneurs.pubmed as pubmed
import moissonneurs.istex as istex
import moissonneurs.cern as cern
import moissonneurs.multivac as multivac
import moissonneurs.hal as hal
+import moissonneurs.isidore as isidore

# TODO : ISIDORE
@@ -42,7 +38,7 @@ urlpatterns = [ url(r'^pubmed/query$' , pubmed.query )
    , url(r'^hal/query$'         , hal.query     )
    , url(r'^hal/save/(\d+)'     , hal.save      )
-   #, url(r'^isidore/query$'     , isidore.query )
-   #, url(r'^isidore/save/(\d+)' , isidore.save  )
+   , url(r'^isidore/query$'     , isidore.query )
+   , url(r'^isidore/save/(\d+)' , isidore.save  )
]
@@ -367,7 +367,7 @@
<p>
Gargantext
<span class="glyphicon glyphicon-registration-mark" aria-hidden="true"></span>
-, version 3.0.6.8,
+, version 3.0.6.9.4,
<a href="http://www.cnrs.fr" target="blank" title="Institution that enables this project.">
Copyrights
<span class="glyphicon glyphicon-copyright-mark" aria-hidden="true"></span>
@@ -41,39 +41,42 @@
<div class="container theme-showcase" role="main">
<div class="jumbotron">
<div class="row">
    <div class="col-md-4">
        <h1>
            <span class="glyphicon glyphicon-home" aria-hidden="true"></span>
            Projects
        </h1>
    </div>
    <div class="col-md-3"></div>
    <div class="col-md-5">
        <p id="project" class="help">
        <br>
        <button id="add" type="button" class="btn btn-primary btn-lg help" data-container="body" data-toggle="popover" data-placement="bottom">
            <span class="glyphicon glyphicon-plus" aria-hidden="true"></span>
            Add a new project
        </button>
        <div id="popover-content" class="hide">
-           <div id="createForm" class="form-group">
-               {% csrf_token %}
-               <div id="status-form" class="collapse">
-               </div>
-               <div class="row inline">
-                   <label class="col-lg-3" for="inputName" ><span class="pull-right">Name:</span></label>
-                   <input class="col-lg-8" type="text" id="inputName" class="form-control">
-               </div>
-               <div class="row inline">
-                   <div class="col-lg-3"></div>
-                   <button id="createProject" class="btn btn-primary btn-sm col-lg-8 push-left">Add Project</button>
-                   <div class="col-lg-2"></div>
-               </div>
-           </div>
+           <form>
+               <div id="createForm" class="form-group">
+                   {% csrf_token %}
+                   <div id="status-form" class="collapse"></div>
+
+                   <div class="row inline">
+                       <label class="col-lg-3" for="inputName" ><span class="pull-right">Name:</span></label>
+                       <input class="col-lg-8" type="text" id="inputName" class="form-control">
+                   </div>
+
+                   <div class="row inline">
+                       <div class="col-lg-3"></div>
+                       <button id="createProject" class="btn btn-primary btn-sm col-lg-8 push-left">Add Project</button>
+                       <div class="col-lg-2"></div>
+                   </div>
+               </div>
+           </form>
        </div>
        </p>
    </div>
</div>
</div>
@@ -87,7 +90,7 @@
</div>
<!-- CHECKBOX EDITION -->
<!--
<div class="row collapse" id="editor">
    <button title="delete selected project" type="button" class="btn btn-danger" id="delete">
    <span class="glyphicon glyphicon-trash " aria-hidden="true" ></span>
@@ -98,9 +101,8 @@
<!-- <button type="button" class="btn btn-info" id="recalculate">
    <span class="glyphicon glyphicon-refresh " aria-hidden="true" onclick="recalculateProjects()"></span>
    </button>
-   -->
</div>
+-->
<br />
@@ -675,7 +675,7 @@
            $("#submit_thing").prop('disabled' , false)
            //$("#submit_thing").attr('onclick', testCERN(query, N));
            $("#submit_thing").on("click", function(){
-               saveMultivac(pubmedquery, N);
+               saveMultivac(pubmedquery, N, "/moissonneurs/multivac/save/");
                //$("#submit_thing").onclick()
            })}
        //(N > {{query_size}})
@@ -684,7 +684,7 @@
            $('#submit_thing').prop('disabled', false);
            $("#submit_thing").html("Processing a sample file")
            $("#submit_thing").on("click", function(){
-               saveMultivac(pubmedquery, N);
+               saveMultivac(pubmedquery, N, "/moissonneurs/multivac/save/");
                //$("#submit_thing").onclick()
            })}
        }
@@ -708,7 +708,6 @@
    //HAL = 11
    if (SourceTypeId == "11"){
        $.ajax({
            // contentType: "application/json",
@@ -736,7 +735,7 @@
            $("#submit_thing").prop('disabled' , false)
            //$("#submit_thing").attr('onclick', testCERN(query, N));
            $("#submit_thing").on("click", function(){
-               saveALL(pubmedquery, N);
+               save(pubmedquery, N, "/moissonneurs/hal/save/");
                //$("#submit_thing").onclick()
            })}
        //(N > {{query_size}})
@@ -745,7 +744,7 @@
            $('#submit_thing').prop('disabled', false);
            $("#submit_thing").html("Processing a sample file")
            $("#submit_thing").on("click", function(){
-               saveALL(pubmedquery, N);
+               save(pubmedquery, N, "/moissonneurs/hal/save/");
                //$("#submit_thing").onclick()
            })}
        }
    //ISIDORE = 12
    if (SourceTypeId == "12"){
        $.ajax({
            // contentType: "application/json",
            url: window.location.origin+"/moissonneurs/isidore/query",
            data: formData,
            type: 'POST',
            beforeSend: function(xhr) {
                xhr.setRequestHeader("X-CSRFToken", getCookie("csrftoken"));
            },
            success: function(data) {
                console.log(data)
                console.log("SUCCESS")
                console.log("enabling "+"#"+value.id)
                // $("#"+value.id).attr('onclick','getGlobalResults(this);');
                $("#submit_thing").prop('disabled' , false)
                //$("#submit_thing").html("Process a {{ query_size }} sample!")

                N = data["results_nb"]
                if(N > 0) {
                    if (N <= {{query_size}}){
                        $("#theresults").html("<i> <b>"+pubmedquery+"</b>: "+N+" publications </i><br>")
                        $("#submit_thing").html("Download!")
                        $("#submit_thing").prop('disabled' , false)
                        //$("#submit_thing").attr('onclick', testCERN(query, N));
                        $("#submit_thing").on("click", function(){
                            save(pubmedquery, N, "/moissonneurs/isidore/save/");
                            //$("#submit_thing").onclick()
                        })}
                    //(N > {{query_size}})
                    else {
                        $("#theresults").html("<i> <b>"+pubmedquery+"</b>: "+N+" publications </i><br>")
                        $('#submit_thing').prop('disabled', false);
                        $("#submit_thing").html("Processing a sample file")
                        $("#submit_thing").on("click", function(){
                            save(pubmedquery, N, "/moissonneurs/isidore/save/");
                            //$("#submit_thing").onclick()
                        })}
                }
                else {
                    $("#theresults").html("<i> <b>"+pubmedquery+"</b>: No results!.</i><br>")
                    if(data[0]==false)
                        $("#theresults").html(theType +" connection error!</i><br>")
                    $('#submit_thing').prop('disabled', true);
                }
            },
            error: function(result) {
                $("#theresults").html(theType +" connection error</i><br>")
                $('#submit_thing').prop('disabled', true);
            }
        });
    }
    }
    // CSS events for selecting one Radio-Input
@@ -819,6 +881,7 @@
        || selectedId == "9"
        || selectedId == "10"
        || selectedId == "11"
+       || selectedId == "12"
    ) {
        console.log("show the button for: " + selectedId)
        $("#div-fileornot").css("visibility", "visible");
@@ -1001,7 +1064,7 @@
    });
}

-function saveALL(query, N){
+function save(query, N, urlGarg){
    console.log("In Gargantext")
    if(!query || query=="") return;
@@ -1016,7 +1079,7 @@
    console.log(data)
    $.ajax({
        dataType: 'json',
-       url: window.location.origin+"/moissonneurs/hal/save/"+projectid,
+       url: window.location.origin + urlGarg + projectid,
        data: data,
        type: 'POST',
        beforeSend: function(xhr) {