Commit 97796ab6 authored by Romain Loth's avatar Romain Loth

merge unstable (graphExplorer subtree + docker installs)

parents bb6fc256 0582e390
......@@ -31,7 +31,13 @@
$rootScope.projectId = path[1];
$rootScope.corpusId = path[2];
$rootScope.docId = path[3];
$rootScope.focusNgram = path[4];
// ex: ["483", "3561", "9754", "35183"]
// (passed from graphExplorer selections)
if (path[4])
$rootScope.focusNgrams = path[4].split(",");
else
$rootScope.focusNgrams = []
// -------------------------------
// shared toolbox (functions useful for several modules) -------------------
......
......@@ -402,10 +402,10 @@
* @param $rootScope (global) to check activeLists and list names
*
* add-on mechanism:
* @param focusNgram: an ngram_id to highlight more
* @param focusNgrams: some ngram_ids to highlight more
* (it is assumed to be already in one of the active lists)
*/
function compileNgramsHtml(annotations, textMapping, $rootScope, focusNgram) {
function compileNgramsHtml(annotations, textMapping, $rootScope, focusNgrams) {
if (typeof $rootScope.activeLists == "undefined") return;
if (_.keys($rootScope.activeLists).length === 0) return;
var templateBegin = "<span ng-controller='TextSelectionController' ng-click='onClick($event)' class='keyword-inline'>";
......@@ -508,8 +508,15 @@
// 2nd pass for result html
// =========================
// first pass for anchors
// ======================
// a small lookup for possible focus items (they'll get different css)
var checkFocusOn = {}
if (focusNgrams) {
for (var i in focusNgrams) {
var focusNgramId = focusNgrams[i]
checkFocusOn[focusNgramId] = true
}
}
angular.forEach(sortedSizeAnnotations, function (annotation) {
// again exclude ngrams that are into inactive lists
if ($rootScope.activeLists[annotation.list_id] === undefined) return;
......@@ -517,8 +524,9 @@
// listName now used to setup css class
var cssClass = $rootScope.lists[annotation.list_id];
// except if FOCUS
if (focusNgram && (annotation.uuid == focusNgram || annotation.group == focusNgram)) {
// except if uuid or group mainform is in FOCUS items
if (focusNgrams &&
(checkFocusOn[annotation.uuid] || checkFocusOn[annotation.group])) {
cssClass = "FOCUS"
}
......@@ -610,7 +618,7 @@
'#title': angular.copy($rootScope.title)
},
$rootScope,
$rootScope.focusNgram // new: optional focus ngram
$rootScope.focusNgrams // optional focus ngrams
);
// inject highlighted HTML
angular.forEach(result, function(html, eltId) {
......
File mode changed from 100644 to 100755
Install Instructions for Gargantext (CNRS):
# Install Instructions for Gargantext (CNRS):
## Get the source code
by cloning gargantext into /srv/gargantext
``` bash
git clone ssh://gitolite@delanoe.org:1979/gargantext /srv/gargantext \
&& cd /srv/gargantext \
&& git fetch origin stable \
&& git checkout stable
```
Help needed?
See [http://gargantext.org/about](http://gargantext.org/about) and [tools]() for the community
The folder will be /srv/gargantext:
* docs contains all documentation on gargantext
  /srv/gargantext/docs/
* install contains all the installation files
  /srv/gargantext/install/
Prepare your environment and make the initial installation.
Once you have set up and installed the Gargantext box, you can use the ./install/run.sh utility
to load the gargantext web platform and access it through your web browser.
Help needed?
See [http://gargantext.org/about](http://gargantext.org/about) and [tools](./contribution_guide.md) for the community
______________________________
Two installation procedures are provided:
1. [Prerequisites](#prerequisites)
   1. Semi-automatic installation [EASY]
   2. Step-by-step installation [ADVANCED]
2. [SETUP](#setup)
   Only the semi-automatic installation is covered here; check out [manual_install](manual_install.md)
   to follow the step-by-step procedure.
3. [INSTALL](#install)
4. [RUN](#run)
______________________________
## Prerequisites
## Init Setup
## Install
## Run
--------------------
# Semi-automatic installation
All the procedure files are located in /srv/gargantext/install/
``` bash
user@computer:$ cd /srv/gargantext/install/
```
## Prerequisites
* A Debian-based OS >= [FIXME]
* At least 35GB in the desired location of Gargantua [FIXME]
* At least 35GB in /srv/ [FIXME]
todo: reduce the size of gargantext lib
todo: remove lib once docker is configure
todo: remove lib once docker is configured
tip: if you do not have enough space for the full package you can:
! tip: if you do not have enough space for the full package you can:
* resize your partition
* make a symlink to gargantext_lib
* A [docker engine installation](https://docs.docker.com/engine/installation/linux/)
## Setup
Prepare your environment and make the initial setup.
Setup can be done in 2 ways:
* [automatic setup](setup.sh): use the setup script provided
* [manual setup](manual_setup.md): if you want to change some parameters
## Install
Two installation procedures are currently proposed:
* the docker way [easy]
* the debian way [advanced]
#### DOCKER WAY [EASY]
## Init Setup
Prepare your environment and make the initial setup.
* Install docker
See [installation instructions for your distribution](https://docs.docker.com/engine/installation/)
This initial step creates a user for the gargantext platform along with downloading additional libs and files.
* Build your docker image
It also installs docker, builds the docker image and builds the gargantext box
``` bash
cd /srv/gargantext/install/docker/dev
./build
ID=$(docker build .) && docker run -i -t $ID
user@computer:/srv/gargantext/install/$ ./init.sh
```
You should see
```
Successfully built <container_id>
```
### Install
Once the init step is done,
* Enter the docker environment
Inside the folder /srv/gargantext/install/
enter the gargantext image
``` bash
/srv/gargantext/install/docker/enterGargantextImage
user@computer:/srv/gargantext/install/$ ./docker/enterGargantextImage
```
go to the installation folder
``` bash
root@dockerimage8989809:$ cd /srv/gargantext/install/
```
[HERE] Check whether the postgresql and python configurations are already done upstream when the docker file is created
* Install Python environment
Inside the docker image, execute as root:
``` bash
/srv/gargantext/install/python/configure
root@dockerimage8989809:/srv/gargantext/install/$ python/configure
```
* Configure PostgreSQL
Inside the docker image, execute as root:
``` bash
/srv/gargantext/install/postgres/configure
```
* Exit the docker
```
exit (or Ctrl+D)
root@computer:/srv/gargantext/install/$ postgres/configure
```
[If OK] remove these lines
Install Gargantext server
* Enter docker container
``` bash
/srv/gargantext/install/docker/enterGargantextImage
```
* Configure the database
Inside the docker container:
``` bash
service postgresql start
#su gargantua
#activate the virtualenv
source /srv/env_3-5/bin/activate
python /srv/gargantext/dbmigrate.py
/srv/gargantext/manage.py makemigrations
/srv/gargantext/manage.py migrate
python /srv/gargantext/dbmigrate.py
```
You have entered the virtualenv, as shown by the (env_3-5) prompt:
``` bash
(env_3-5) $ python /srv/gargantext/dbmigrate.py
(env_3-5) $ /srv/gargantext/manage.py makemigrations
(env_3-5) $ /srv/gargantext/manage.py migrate
(env_3-5) $ python /srv/gargantext/dbmigrate.py
#will create tables and not hyperdata_nodes
python /srv/gargantext/dbmigrate.py
(env_3-5) $ python /srv/gargantext/dbmigrate.py
#will create table hyperdata_nodes
#launch first time the server to create first user
/srv/gargantext/manage.py runserver 0.0.0.0:8000
/srv/gargantext/init_accounts.py /srv/gargantext/install/init/account.csv
(env_3-5) $ /srv/gargantext/manage.py runserver 0.0.0.0:8000
(env_3-5) $ /srv/gargantext/init_accounts.py /srv/gargantext/install/init/account.csv
```
FIXME: dbmigrate needs to be launched several times since tables are
created in alphabetical order (and not dependency order)
#### Debian way [advanced]
## Run Gargantext
* Launch Gargantext
* Exit the docker
```
exit (or Ctrl+D)
```
## Run Gargantext
Enter the docker container:
``` bash
......@@ -126,31 +141,30 @@ Enter the docker container:
```
Inside the docker container:
``` bash
#start postgresql
#start Database (postgresql)
service postgresql start
#change to user
su gargantua
#activate the virtualenv
source /srv/env_3-5/bin/activate
#go to gargantext srv
cd /srv/gargantext/
(env_3-5) $ cd /srv/gargantext/
#run the server
./manage.py runserver 0.0.0.0:8000
(env_3-5) $ ./manage.py runserver 0.0.0.0:8000
```
* Launch browser
outside the docker
Keep it open, and outside the docker launch a browser
``` bash
chromium http://127.0.0.1:8000/
```
* Click on Test Gargantext
```
Login : gargantua
Password : autnagrag
```
Enjoy :)
See [User Guide](/demo/tuto.md) for a quick usage example
......@@ -293,3 +293,15 @@ RULE_NPN = "{<JJ.*>*<NN.*>+((<P|IN> <DT>? <JJ.*>* <NN.*>+ <JJ.*>*)|(<JJ.*>))*
RULE_TINA = "^((VBD,|VBG,|VBN,|CD.?,|JJ.?,|\?,){0,2}?(N.?.?,|\?,)+?(CD.,)??)\
+?((PREP.?|DET.?,|IN.?,|CC.?,|\?,)((VBD,|VBG,|VBN,|CD.?,|JJ.?,|\?\
,){0,2}?(N.?.?,|\?,)+?)+?)*?$"
# ------------------------------------------------------------------------------
# Graph constraints to compute the graph:
# Modes: live graph generation, graph computed asynchronously, or errors detected
# here are the corpus size bounds and the minimum maplist size required to compute the graph
graph_constraints = {'corpusMax' : 400
,'corpusMin' : 10
,'mapList' : 50
}
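For orientation, a minimal sketch of how these constraints select the computation mode, mirroring the checks that `get_graph` performs in graph/rest.py later in this commit (the helper name `choose_graph_mode` is hypothetical):

``` python
from gargantext.constants import graph_constraints

def choose_graph_mode(corpus_size, maplist_size):
    # hypothetical helper: mirrors the constraint checks in get_graph
    if maplist_size < graph_constraints['mapList']:
        # map list too small: report an error state with its size
        return {'state': 'mapListError', 'length': maplist_size}
    if corpus_size > graph_constraints['corpusMax']:
        # corpus too big for live computation: cooccurrences get
        # scheduled asynchronously with celery and the user is informed
        return {'state': 'corpusMax', 'length': corpus_size}
    if corpus_size <= graph_constraints['corpusMin']:
        # corpus too small to produce a meaningful graph
        return {'state': 'corpusMin', 'length': corpus_size}
    # otherwise the graph is computed live
    return {'state': 'live'}
```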
......@@ -23,7 +23,7 @@ BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
SECRET_KEY = '!%ktkh981)piil1%t5r0g4$^0=uvdafk!=f2x8djxy7_gq(n5%'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
DEBUG = False
MAINTENANCE = False
ALLOWED_HOSTS = [ 'localhost'
......@@ -39,7 +39,7 @@ import djcelery
djcelery.setup_loader()
BROKER_URL = 'amqp://guest:guest@localhost:5672/'
CELERY_ACCEPT_CONTENT = ['pickle', 'json', 'msgpack', 'yaml']
CELERY_IMPORTS = ("gargantext.util.toolchain")
CELERY_IMPORTS = ("gargantext.util.toolchain", "graph.cooccurrences")
# garg's custom unittests runner (adapted to our db models)
......@@ -57,7 +57,7 @@ INSTALLED_APPS = [
'rest_framework',
'djcelery',
'annotations',
'graphExplorer',
'graph',
'moissonneurs',
]
......
......@@ -21,9 +21,8 @@ import gargantext.views.pages.urls
from annotations import urls as annotations_urls
from annotations.views import main as annotations_main_view
# Module "Graph Explorer"
#from graphExplorer import urls as graphExplorer_urls
import graphExplorer.urls
# Module for graph service
import graph.urls
# Module Scrapers
import moissonneurs.urls
......@@ -35,8 +34,8 @@ urlpatterns = [ url(r'^admin/' , admin.site.urls )
, url(r'^favicon.ico$', Redirect.as_view( url=static.url('favicon.ico')
, permanent=False), name="favicon")
# Module "Graph Explorer"
, url(r'^' , include( graphExplorer.urls ) )
# Module Graph
, url(r'^' , include( graph.urls ) )
# Module Annotation
# tempo: unchanged doc-annotations routes --
......
......@@ -176,7 +176,6 @@ def parse_extract_indexhyperdata(corpus):
session.commit()
@shared_task
def recount(corpus):
"""
......
......@@ -5,6 +5,7 @@ from . import ngrams
from . import metrics
from . import ngramlists
from . import analytics
from graph.rest import Graph
urlpatterns = [ url(r'^nodes$' , nodes.NodeListResource.as_view() )
, url(r'^nodes/(\d+)$' , nodes.NodeResource.as_view() )
......@@ -61,4 +62,11 @@ urlpatterns = [ url(r'^nodes$' , nodes.NodeListResource.as_view()
, url(r'^ngramlists/maplist$' , ngramlists.MapListGlance.as_view() )
# fast access to maplist, similarly formatted for termtable
, url(r'^projects/(\d+)/corpora/(\d+)/explorer$' , Graph.as_view())
# data for graph explorer (json)
# GET /api/projects/43198/corpora/111107/explorer?
# Corresponding view is : /projects/43198/corpora/111107/explorer?
# Parameters (example):
# explorer?field1=ngrams&field2=ngrams&distance=conditional&bridgeness=5&start=1996-6-1&end=2002-10-5
]
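For illustration, a hedged client call against the route above; the host is assumed to be a local dev server and the project/corpus ids are the placeholders from the comment, not a guaranteed contract:

``` python
# hypothetical client call; ids and host are placeholders, and the
# response shapes below follow get_graph in graph/rest.py
import requests

params = {
    'field1': 'ngrams', 'field2': 'ngrams',
    'distance': 'conditional', 'bridgeness': 5,
    'start': '1996-6-1', 'end': '2002-10-5',
}
r = requests.get(
    'http://localhost:8000/api/projects/43198/corpora/111107/explorer',
    params=params,
)
data = r.json()
# either {'nodes': [...], 'links': [...]}
# or {'state': 'corpusMax'|'corpusMin'|'mapListError', 'length': <int>}
```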
Module Graph Explorer: from text to graph
=========================================
## How to contribute?
Some solutions:
1) please report to dev@gargantext.org
2) fix with the git repo and a pull request
## Graph Explorer main
0) All the routes of the Graph Explorer: urls.py
1) Main view of the graph explorer: views.py
2) Data are retrieved as REST: rest.py
3) Graph is generated (graph.py) through different steps (see the sketch after this list)
   a) check the constraints (graph_constraints) in gargantext/constants.py
   b) Cooccurrences are computed (live or asynchronously): cooccurrences.py
   c) Threshold and distances: distances.py
   d) clustering: louvain.py
   e) links between communities: bridgeness.py
4) Additional features:
   a) intersection of graphs: intersection.py
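A condensed sketch of that pipeline, using the signatures that appear later in this commit (the `build_graph` wrapper itself is hypothetical):

``` python
# hypothetical wrapper: composes the modules the way graph/rest.py does
from graph.cooccurrences import countCooccurrences
from graph.distances import clusterByDistances
from graph.bridgeness import filterByBridgeness

def build_graph(corpus_id, mapList_id, groupList_id,
                threshold=1, distance='conditional', bridgeness=5):
    # b) count cooccurrences (live path; the async path goes through celery)
    cooc_matrix = countCooccurrences(corpus_id=corpus_id,
                                     mapList_id=mapList_id,
                                     groupList_id=groupList_id,
                                     isMonopartite=True, threshold=threshold,
                                     save_on_db=False)
    # c) + d) distances between terms, then louvain clustering
    G, partition, ids, weight = clusterByDistances(cooc_matrix,
                                                   field1="ngrams",
                                                   field2="ngrams",
                                                   distance=distance)
    # e) keep only the strongest links between communities
    return filterByBridgeness(G, partition, ids, weight, bridgeness,
                              'node_link', "ngrams", "ngrams")
```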
## TODO
1) save parameters in hyperdata
2) graph explorer:
   * save current graph
3) myGraphs view:
   * progress bar
   * show parameters
   * copy / paste and change some parameters to generate a new graph
......@@ -13,13 +13,12 @@ def filterByBridgeness(G,partition,ids,weight,bridgeness,type,field1,field2):
nodesB_dict = {}
for node_id in G.nodes():
#node,type(labels[node])
G.node[node_id]['pk'] = ids[node_id][1]
nodesB_dict [ ids[node_id][1] ] = True
# TODO the query below is not optimized (do it in do_distance).
the_label = session.query(Ngram.terms).filter(Ngram.id==node_id).first()
the_label = ", ".join(the_label)
G.node[node_id]['label'] = the_label
G.node[node_id]['size'] = weight[node_id]
G.node[node_id]['type'] = ids[node_id][0].replace("ngrams","terms")
G.node[node_id]['attributes'] = { "clust_default": partition[node_id]} # new format
......@@ -31,7 +30,7 @@ def filterByBridgeness(G,partition,ids,weight,bridgeness,type,field1,field2):
if bridgeness > 0:
com_link = defaultdict(lambda: defaultdict(list))
com_ids = defaultdict(list)
for k, v in partition.items():
com_ids[v].append(k)
......@@ -39,14 +38,14 @@ def filterByBridgeness(G,partition,ids,weight,bridgeness,type,field1,field2):
s = e[0]
t = e[1]
weight = G[ids[s][1]][ids[t][1]]["weight"]
if bridgeness < 0:
info = { "s": ids[s][1]
, "t": ids[t][1]
, "w": weight
}
links.append(info)
else:
if partition[s] == partition[t]:
......@@ -55,11 +54,11 @@ def filterByBridgeness(G,partition,ids,weight,bridgeness,type,field1,field2):
, "w": weight
}
links.append(info)
if bridgeness > 0:
if partition[s] < partition[t]:
com_link[partition[s]][partition[t]].append((s,t,weight))
if bridgeness > 0:
for c1 in com_link.keys():
for c2 in com_link[c1].keys():
......
......@@ -9,7 +9,17 @@ from sqlalchemy import desc, asc, or_, and_
#import inspect
import datetime
def countCooccurrences( corpus=None
from celery import shared_task
def filterMatrix(matrix, mapList_id, groupList_id):
mapList = UnweightedList( mapList_id )
group_list = Translations ( groupList_id )
cooc = matrix & (mapList * group_list)
return cooc
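A hedged reading of the list algebra above, assuming the semantics of gargantext.util.lists: `mapList * group_list` projects the group translations onto the map list, and `&` keeps only the cooccurrence pairs whose two terms survive that projection. A usage sketch (ids and names in comments are placeholders):

``` python
# hedged usage sketch; 123/456 are placeholder list node ids
from gargantext.util.lists import WeightedMatrix

matrix = WeightedMatrix(cooc_query)   # cooc_query: the SQLAlchemy query
                                      # built below in countCooccurrences
cooc = filterMatrix(matrix, mapList_id=123, groupList_id=456)
cooc.save(coocNode_id)                # coocNode_id: target COOCCURRENCES node
```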
@shared_task
def countCooccurrences( corpus_id=None , test= False
, field1='ngrams' , field2='ngrams'
, start=None , end=None
, mapList_id=None , groupList_id=None
......@@ -39,8 +49,12 @@ def countCooccurrences( corpus=None
# Security test
field1,field2 = str(field1), str(field2)
# Get corpus as Python object
corpus = session.query(Node).filter(Node.id==corpus_id).first()
# Get node
if not coocNode_id:
coocNode_id0 = ( session.query( Node.id )
.filter( Node.typename == "COOCCURRENCES"
, Node.name == "GRAPH EXPLORER"
......@@ -182,15 +196,14 @@ def countCooccurrences( corpus=None
cooc_query = cooc_query.group_by(NodeHyperdataNgram.ngram_id, NodeNgramY.ngram_id)
# Order according some scores
cooc_query = cooc_query.order_by(desc('cooc_score'))
# If ordering is really needed, use Ordered Index (faster)
#cooc_query = cooc_query.order_by(desc('cooc_score'))
matrix = WeightedMatrix(cooc_query)
mapList = UnweightedList( mapList_id )
group_list = Translations ( groupList_id )
cooc = matrix & (mapList * group_list)
cooc = filterMatrix(matrix, mapList_id, groupList_id)
if save_on_db:
cooc.save(coocNode_id)
return(coocNode_id)
else:
return cooc
print("Cooccurrence Matrix saved")
return cooc
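Since countCooccurrences is now a @shared_task, it can be invoked either synchronously (live graph) or through the scheduler, as graph/rest.py does further down in this commit. A hedged sketch of both call styles (corpus, mapList_id and groupList_id as in get_graph):

``` python
# hedged sketch of the two call styles used in graph/rest.py
from gargantext.util.scheduling import scheduled
from graph.cooccurrences import countCooccurrences

# live: small corpus, return the filtered matrix without saving it
cooc = countCooccurrences(corpus_id=corpus.id, threshold=1,
                          mapList_id=mapList_id, groupList_id=groupList_id,
                          isMonopartite=True, save_on_db=False)

# async: a corpus above graph_constraints['corpusMax'] is computed by celery
# and saved as a COOCCURRENCES node for later retrieval
scheduled(countCooccurrences)(corpus_id=corpus.id, threshold=1,
                              mapList_id=mapList_id, groupList_id=groupList_id,
                              isMonopartite=True, save_on_db=True)
```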
......@@ -2,7 +2,7 @@ from gargantext.models import Node, NodeNgram, NodeNgramNgram, \
NodeHyperdata
from gargantext.util.db import session, aliased
from graphExplorer.louvain import best_partition
from graph.louvain import best_partition
from copy import copy
from collections import defaultdict
......
# Gargantext lib
from gargantext.util.db import session, aliased
from gargantext.util.lists import WeightedMatrix, UnweightedList, Translations
from gargantext.util.http import JsonHttpResponse
from gargantext.models import Node, Ngram, NodeNgram, NodeNgramNgram, NodeHyperdata
#from gargantext.util.toolchain.ngram_coocs import compute_coocs
from graph.cooccurrences import countCooccurrences, filterMatrix
from graph.distances import clusterByDistances
from graph.bridgeness import filterByBridgeness
from gargantext.util.scheduling import scheduled
from gargantext.constants import graph_constraints
from datetime import datetime
def get_graph( request=None , corpus=None
, field1='ngrams' , field2='ngrams'
, mapList_id = None , groupList_id = None
, cooc_id=None , type='node_link'
, start=None , end=None
, threshold=1
, distance='conditional'
, isMonopartite=True # By default, we compute terms/terms graph
, bridgeness=5
#, size=1000
):
'''
Get_graph : main steps:
0) Check the parameters
get_graph :: GraphParameters -> Either (Dic Nodes Links) (Dic State Length)
where type Length = Int
get_graph first checks the parameters and returns either graph data or a dict with
state "type" and an integer indicating the size of the offending parameter
(maybe we could add a String in that step to factor and give here the error message)
1) count Cooccurrences (function countCooccurrences)
main parameters: threshold
2) filter and cluster By Distances (function clusterByDistances)
main parameter: distance
3) filter By Bridgeness (function filterByBridgeness)
main parameter: bridgeness
4) format the graph (formatGraph)
main parameter: format_
'''
before_cooc = datetime.now()
# case where Cooccurrences have not been computed already
if cooc_id == None:
# case of mapList not big enough
# ==============================
# if we do not have any mapList_id already
if mapList_id is None:
mapList_id = session.query(Node.id).filter(Node.typename == "MAPLIST").first()[0]
mapList_size_query = session.query(NodeNgram).filter(NodeNgram.node_id == mapList_id)
mapList_size = mapList_size_query.count()
if mapList_size < graph_constraints['mapList']:
# Do not compute the graph if mapList is not big enough
return {'state': "mapListError", "length" : mapList_size}
# case of corpus not big enough
# ==============================
corpus_size_query = (session.query(Node)
.filter(Node.typename=="DOCUMENT")
.filter(Node.parent_id == corpus.id)
)
# filter by date if any start date
# --------------------------------
if start is not None:
#date_start = datetime.datetime.strptime ("2001-2-3 10:11:12", "%Y-%m-%d %H:%M:%S")
date_start = datetime.strptime (str(start), "%Y-%m-%d")
date_start_utc = date_start.strftime("%Y-%m-%d %H:%M:%S")
Start=aliased(NodeHyperdata)
corpus_size_query = (corpus_size_query.join( Start
, Start.node_id == Node.id
)
.filter( Start.key == 'publication_date')
.filter( Start.value_utc >= date_start_utc)
)
# filter by date if any end date
# --------------------------------
if end is not None:
date_end = datetime.strptime (str(end), "%Y-%m-%d")
date_end_utc = date_end.strftime("%Y-%m-%d %H:%M:%S")
End=aliased(NodeHyperdata)
corpus_size_query = (corpus_size_query.join( End
, End.node_id == Node.id
)
.filter( End.key == 'publication_date')
.filter( End.value_utc <= date_end_utc )
)
# Finally test if the size of the corpora is big enough
# --------------------------------
corpus_size = corpus_size_query.count()
if corpus_size > graph_constraints['corpusMax']:
# Then compute cooc asynchronously with celery
scheduled(countCooccurrences)( corpus_id=corpus.id
#, field1="ngrams", field2="ngrams"
, start=start , end =end
, mapList_id=mapList_id , groupList_id=groupList_id
, isMonopartite=True , threshold = threshold
, save_on_db = True
#, limit=size
)
# Dic to inform user that corpus maximum is reached then
# graph is computed asynchronously
return {"state" : "corpusMax", "length" : corpus_size}
elif corpus_size <= graph_constraints['corpusMin']:
# Do not compute the graph if corpus is not big enough
return {"state" : "corpusMin", "length" : corpus_size}
else:
# If graph_constraints are ok then compute the graph in live
cooc_matrix = countCooccurrences( corpus_id=corpus.id
#, field1="ngrams", field2="ngrams"
, start=start , end =end
, mapList_id=mapList_id , groupList_id=groupList_id
, isMonopartite=True , threshold = threshold
, save_on_db = False
#, limit=size
)
else:
print("Getting data for matrix %d", int(cooc_id))
matrix = WeightedMatrix(int(cooc_id))
#print(matrix)
cooc_matrix = filterMatrix(matrix, mapList_id, groupList_id)
# fyi
after_cooc = datetime.now()
print("... Cooccurrences took %f s." % (after_cooc - before_cooc).total_seconds())
# case when 0 coocs are observed (usually b/c not enough ngrams in maplist)
if len(cooc_matrix.items) == 0:
print("GET_GRAPH: 0 coocs in matrix")
data = {'nodes':[], 'links':[]} # empty data
# normal case
else:
G, partition, ids, weight = clusterByDistances ( cooc_matrix
, field1="ngrams", field2="ngrams"
, distance=distance
)
after_cluster = datetime.now()
print("... Clustering took %f s." % (after_cluster - after_cooc).total_seconds())
data = filterByBridgeness(G,partition,ids,weight,bridgeness,type,field1,field2)
after_filter = datetime.now()
print("... Filtering took %f s." % (after_filter - after_cluster).total_seconds())
return data
from django.conf.urls import patterns, url
# Module "Graph Explorer"
from graphExplorer.rest import Graph
from graphExplorer.views import explorer
from graphExplorer.intersection import intersection
from graph.rest import Graph
from graph.views import explorer, myGraphs
from graph.intersection import intersection
# TODO : factor urls
......@@ -12,8 +12,9 @@ from graphExplorer.intersection import intersection
# ^explorer/$corpus_id/data.json
# ^explorer/$corpus_id/intersection
urlpatterns = [ url(r'^explorer/intersection/(\w+)$', intersection )
, url(r'^projects/(\d+)/corpora/(\d+)/explorer$', explorer )
, url(r'^projects/(\d+)/corpora/(\d+)/graph$' , Graph.as_view())
, url(r'^projects/(\d+)/corpora/(\d+)/node_link.json$', Graph.as_view())
# GET ^api/projects/(\d+)/corpora/(\d+)/explorer$ -> data in json format
urlpatterns = [ url(r'^projects/(\d+)/corpora/(\d+)/explorer$' , explorer )
, url(r'^projects/(\d+)/corpora/(\d+)/myGraphs$' , myGraphs )
, url(r'^explorer/intersection/(\w+)$' , intersection )
]
......@@ -25,21 +25,58 @@ def explorer(request, project_id, corpus_id):
# and the project just for project.id in corpusBannerTop
project = cache.Node[project_id]
graphurl = "projects/" + str(project_id) + "/corpora/" + str(corpus_id) + "/node_link.json"
# rendered page : explorer.html
return render(
template_name = 'graphExplorer/explorer.html',
request = request,
context = {
'debug' : settings.DEBUG ,
'request' : request ,
'user' : request.user ,
'date' : datetime.now() ,
'project' : project ,
'corpus' : corpus ,
'maplist_id': maplist_id ,
'view' : 'graph' ,
},
)
@requires_auth
def myGraphs(request, project_id, corpus_id):
'''
List all of my Graphs
'''
user = cache.User[request.user.id]
# we pass our corpus
corpus = cache.Node[corpus_id]
# and the project just for project.id in corpusBannerTop
project = cache.Node[project_id]
coocs = corpus.children('COOCCURRENCES', order=True).all()
coocs_count = dict()
for cooc in coocs:
cooc_nodes = session.query(NodeNgramNgram).filter(NodeNgramNgram.node_id==cooc.id).count()
coocs_count[cooc.id] = cooc_nodes
return render(
template_name = 'pages/corpora/myGraphs.html',
request = request,
context = {
'debug' : settings.DEBUG,
'request' : request,
'user' : request.user,
'date' : datetime.now(),
'project' : project,
'resourcename' : resourcename(corpus),
'corpus' : corpus,
'maplist_id': maplist_id,
'graphfile' : graphurl,
'view' : 'graph'
'view' : 'myGraph',
'coocs' : coocs,
'coocs_count' : coocs_count
},
)
Module Graph Explorer: from text to graph.
Maintainer: If you see bugs, please report to team@gargantext.org
# Gargantext lib
from gargantext.util.db import session
from gargantext.util.http import JsonHttpResponse
from gargantext.models import Node, Ngram, NodeNgram, NodeNgramNgram
#from gargantext.util.toolchain.ngram_coocs import compute_coocs
from graphExplorer.cooccurrences import countCooccurrences
from graphExplorer.distances import clusterByDistances
from graphExplorer.bridgeness import filterByBridgeness
# Prelude lib
from copy import copy, deepcopy
from collections import defaultdict
from sqlalchemy.orm import aliased
# Math/Graph lib
import math
import pandas as pd
import numpy as np
import networkx as nx
def get_graph( request=None , corpus=None
, field1='ngrams' , field2='ngrams'
, mapList_id = None , groupList_id = None
, cooc_id=None , type='node_link'
, start=None , end=None
, threshold=1
, distance='conditional'
, isMonopartite=True # By default, we compute terms/terms graph
, bridgeness=5
#, size=1000
):
'''
Get_graph : main steps:
1) count Cooccurrences (function countCooccurrences)
main parameters: threshold
2) filter and cluster By Distances (function clusterByDistances)
main parameter: distance
3) filter By Bridgeness (function filterByBridgeness)
main parameter: bridgeness
4) format the graph (formatGraph)
main parameter: format_
'''
from datetime import datetime
before_cooc = datetime.now()
# TODO change test here (always true)
# to something like "if cooc.status.threshold == required_threshold
# and group.creation_time < cooc.creation_time"
# if False => read and give to clusterByDistances
# if True => compute and give to clusterByDistances <==
if cooc_id == None:
cooc_matrix = countCooccurrences( corpus=corpus
#, field1="ngrams", field2="ngrams"
, start=start , end =end
, mapList_id=mapList_id , groupList_id=groupList_id
, isMonopartite=True , threshold = threshold
, save_on_db = False
#, limit=size
)
else:
cooc_matrix = WeightedMatrix(cooc_id)
# fyi
after_cooc = datetime.now()
print("... Cooccurrences took %f s." % (after_cooc - before_cooc).total_seconds())
G, partition, ids, weight = clusterByDistances ( cooc_matrix
, field1="ngrams", field2="ngrams"
, distance=distance
)
after_cluster = datetime.now()
print("... Clustering took %f s." % (after_cluster - after_cooc).total_seconds())
data = filterByBridgeness(G,partition,ids,weight,bridgeness,type,field1,field2)
after_filter = datetime.now()
print("... Filtering took %f s." % (after_filter - after_cluster).total_seconds())
return data
```
#!/bin/bash
#name:01-setup
echo "****************SETUP**************************";
for dir in "/srv/gargantext_lib" "/srv/gargantext_static" "/srv/gargantext_media"; do
sudo mkdir -p $dir ;
sudo chown gargantua:gargantua $dir ;
done;
sudo wget http://dl.gargantext.org/gargantext_lib.tar.bz2 \
&& sudo tar xvjf gargantext_lib.tar.bz2 --directory /srv/gargantext_lib \
&& sudo chown -R gargantua:gargantua /srv/gargantext_lib \
&& echo ":::::::::::::::::Done::::::::::::::::::::::::::";
#TODO clone the repo into /srv/gargantext/ and reduce the different steps
#git clone ssh://gitolite@delanoe.org:1979/gargantext /srv/gargantext \
# && cd /srv/gargantext \
# && git fetch origin stable \
# && git checkout stable \
```
#!/bin/bash
#configure the base image gargamelle
echo '****************BUILD**********************************'
docker build -t gargamelle:latest ./gargamelle
#2 options with this image:
# configure the container
# run the image with the app in it
echo '::::::::::::::::::::GARGAMELLE IMAGE BUILT:::::::::::::'
echo '*************CONFIG************************************'
sudo docker run \
-v /srv/:/srv/ \
-p 8000 \
-p 5432 \
-it gargamelle:latest \
/bin/bash -c "/srv/gargantext/install/gargamelle/psql_configure.sh"
sudo docker rm -f `docker ps -a | grep -v CONTAINER | awk '{print $1 }'`
sudo docker run \
-v /srv/:/srv/ \
-p 8000 \
-p 5432 \
-it gargamelle:latest \
/bin/bash -c "/srv/gargantext/install/gargamelle/django_configure.sh"
sudo docker rm -f `docker ps -a | grep -v CONTAINER | awk '{print $1 }'`
#!/bin/bash
sudo docker run \
-v /srv/:/srv/\
-p 8000 \
-p 5432 \
-it gargamelle:latest \
# /bin/bash -c "service postgresql start; su gargantua -c \'source /env_3-5/bin/activate && /srv/gargantext/manage.py runserver 0.0.0.0:8000\'"
# Migration from Gargantext < 3.0.0 versions towards >= 3.*
## Installation
First, install Python 3.5 (see https://www.python.org/downloads/ for
download links).
```bash
cd /tmp
wget https://www.python.org/ftp/python/3.5.1/Python-3.5.1.tar.xz
tar xvfJ Python-3.5.1.tar.xz
cd Python-3.5.1
./configure
make -j4 # -j4 runs the compilation on 4 parallel jobs
sudo make install
```
Other components are required:
```bash
sudo pip3.5 install virtualenv
sudo apt-get install rabbitmq-server
```
Then build a virtual environment:
```bash
virtualenv-3.5 VENV
source VENV/bin/activate
pip3.5 install git+https://github.com/zzzeek/sqlalchemy.git@rel_1_1
pip3.5 install -U -r requirements.txt
```
## Migrate database
### Django models
```bash
./manage.py makemigrations
./manage.py migrate --fake-initial
```
...or if it fails, try the commands below:
```bash
./manage.py makemigrations
./manage.py migrate --run-syncdb
```
(see [Django documentation](https://docs.djangoproject.com/en/1.9/topics/migrations/))
### SQLAlchemy models
```bash
./dbmigrate.py
```
## Start the Django server
```bash
./manage.py celeryd --loglevel=INFO # to ensure Celery is properly started
./manage.py runserver
```
Gargantext
==========
Install Instructions for Gargantext (CNRS):
1. [SETUP](#setup)
2. [INSTALL](#install)
3. [RUN](#run)
## Help needed?
See http://gargantext.org/about and tools for the community
## SETUP
Prepare your environment
Create user gargantua
The main user of Gargantext is Gargantua (role of Pantagruel soon)!
``` bash
sudo adduser --disabled-password --gecos "" gargantua
```
Create the directories you need
``` bash
for dir in "/srv/gargantext"
"/srv/gargantext_lib"
"/srv/gargantext_static"
"/srv/gargantext_media"
"/srv/env_3-5"; do
sudo mkdir -p $dir ;
sudo chown gargantua:gargantua $dir ;
done
```
You should see:
```bash
$tree /srv
/srv
├── gargantext
├── gargantext_lib
├── gargantext_media
│   └── srv
│   └── env_3-5
├── gargantext_static
└── lost+found [error opening dir]
```
## Get the source code of Gargantext
Clone the repository of gargantext
``` bash
git clone ssh://gitolite@delanoe.org:1979/gargantext /srv/gargantext \
&& cd /srv/gargantext \
&& git fetch origin unstable \
&& git checkout unstable
```
**Optional**: if you want to contribute, clone the repo into your own branch
``` bash
git checkout -b username-unstable unstable
```
! TODO (soon) : git clone https://gogs.iscpif.fr/gargantext.git
## SETUP
Build your OS dependencies.
There are 2 ways; for each you need to install Debian GNU/Linux dependencies.
1. [EASY] [Docker way](#DOCKER)
2. [EXPERT] [Debian way](#DEBIAN)
### DOCKER
* Install docker
See [installation instruction for your distribution](https://docs.docker.com/engine/installation/)
#### Build your docker image
``` bash
cd /srv/gargantext/install/docker/dev
./build
```
You should see
```
Successfully built <container_id>
```
#### Enter the docker environment
``` bash
/srv/gargantext/install/docker/enterGargantextImage
```
#### Install Python environment
Inside the docker image, execute as root:
``` bash
/srv/gargantext/install/python/configure
```
#### Configure PostgreSQL
Inside the docker image, execute as root:
``` bash
/srv/gargantext/install/postgres/configure
```
#### Exit the docker
```
exit
```
#### Get main libraries
This can take a while, so be patient :)
``` bash
wget http://dl.gargantext.org/gargantext_lib.tar.bz2 \
&& tar xvjf gargantext_lib.tar.bz2 --directory /srv/gargantext_lib \
&& sudo chown -R gargantua:gargantua /srv/gargantext_lib \
&& echo "Libs installed"
```
### DEBIAN
[EXPERTS] Debian way (directory install/debian)
## INSTALL Gargantext
### Enter docker container
``` bash
/srv/gargantext/install/docker/enterGargantextImage
```
### Inside docker container configure the database
``` bash
service postgresql start
su gargantua
source /srv/env_3-5/bin/activate
python /srv/gargantext/dbmigrate.py
/srv/gargantext/manage.py migrate
python /srv/gargantext/dbmigrate.py
python /srv/gargantext/dbmigrate.py
echo "TODO: Init first user"
```
FIXME: dbmigrate needs to be launched several times since tables are
created in alphabetical order (and not dependency order)
### Inside docker container launch Gargantext
``` bash
service postgresql start
su gargantua
source /srv/env_3-5/bin/activate
/srv/gargantext/manage.py runserver 0.0.0.0:8000
python /srv/gargantext/init_accounts.py /srv/gargantext/install/init/account.csv
```
## RUN
### Outside docker container launch browser
``` bash
chromium http://127.0.0.1:8000/
```
Click on Test Gargantext
Login : gargantua
Password : autnagrag
Enjoy :)
Install Instructions for Gargantext (CNRS).
1. [SETUP](##SETUP)
2. [INSTALL](##INSTALL)
3. [RUN](##RUN)
## Support needed?
See http://gargantext.org/about and tools for the community
## Setup
Prepare your environment
Build your OS dependencies inside a docker
Main user of Gargantext is Gargantua (role of Pantagruel soon)!
``` bash
cd /srv/gargantext/install/docker/dev
./build
```
## Create the directories you need
## INSTALL
### Enter docker container
``` bash
/srv/gargantext/install/docker/enterGargantextImage
```
### Create the directories you need
``` bash
for dir in "/srv/gargantext"
"/srv/gargantext_lib"
"/srv/gargantext_static"
"/srv/gargantext_media"
"/srv/env_3-5"; do
sudo mkdir -p $dir ;
sudo chown gargantua:gargantua $dir ;
done
```
You should see:
```bash
$tree /srv
/srv
├── gargantext
├── gargantext_lib
├── gargantext_media
│   └── srv
│   └── env_3-5
└── gargantext_static
```
## Get the source code of Gargantext
```bash
cp ~/.ssh/id_rsa.pub id_rsa.pub
git clone ssh://gitolite@delanoe.org:1979/gargantext /srv/gargantext \
&& cd /srv/gargantext \
&& git fetch origin unstable \
&& git checkout unstable
```
TODO (soon) : git clone https://gogs.iscpif.fr/gargantext.git
## Install Python environment
Inside the docker image, execute as root:
``` bash
/srv/gargantext/install/python/configure
```
## Configure PostgreSql
Inside the docker image, execute as root:
``` bash
/srv/gargantext/install/postgres/configure
```
## Get main libraries
This can take a while, so be patient :)
``` bash
wget http://dl.gargantext.org/gargantext_lib.tar.bz2 \
&& sudo tar xvjf gargantext_lib.tar.bz2 --directory /srv/gargantext_lib \
&& sudo chown -R gargantua:gargantua /srv/gargantext_lib \
&& echo "Libs installed"
```
## Configure && Launch Gargantext
### Inside docker container configure the database
``` bash
service postgresql start
su gargantua
source /srv/env_3-5/bin/activate
python /srv/gargantext/dbmigrate.py
/srv/gargantext/manage.py migrate
python /srv/gargantext/dbmigrate.py
python /srv/gargantext/dbmigrate.py
python /srv/gargantext/init_accounts.py /srv/gargantext/install/init/account.csv
```
FIXME: dbmigrate needs to be launched several times since tables are
created in alphabetical order (and not dependency order)
## RUN
Inside docker container launch Gargantext
``` bash
service postgresql start
su gargantua
source /srv/env_3-5/bin/activate
/srv/gargantext/manage.py runserver 0.0.0.0:8000
```
### Outside docker container launch browser
``` bash
chromium http://127.0.0.1:8000/
```
Click on Test Gargantext
Login : gargantua
Password : autnagrag
Enjoy :)
#!/bin/dash
# TODO do apt-get install with --force-yes
#postgresql3.4-server-dev
#+libxml2-dev
sudo apt-get install --force-yes postgresql
sudo apt-get install --force-yes postgresql-contrib
sudo apt-get install --force-yes rabbitmq-server
sudo apt-get install --force-yes tmux
sudo apt-get install --force-yes uwsgi uwsgi-plugin-python3
#apt-get install --force-yes python-virtualenv
sudo apt-get install --force-yes libpng12-dev
sudo apt-get install --force-yes libpng-dev
sudo apt-get install --force-yes libfreetype6-dev
sudo apt-get install --force-yes python-dev
sudo apt-get install --force-yes libpq-dev
#apt-get build-dep python-matplotlib
#apt-get install --force-yes python-matplotlib
#Paquets Debian a installer
# easy_install --force-yes -U distribute (matplotlib)
#lxml
sudo apt-get install --force-yes libffi-dev
sudo apt-get install --force-yes libxml2-dev
sudo apt-get install --force-yes libxslt1-dev
# ipython readline
sudo apt-get install --force-yes libncurses5-dev
sudo apt-get install --force-yes pandoc
# scipy:
sudo apt-get install --force-yes gfortran
sudo apt-get install --force-yes libopenblas-dev
sudo apt-get install --force-yes liblapack-dev
#nlpserver
sudo apt-get install --force-yes libgflags-dev
sudo apt-get install --force-yes libgoogle-glog-dev
# MElt
# soon
## SERVER Configuration
# server configuration
sudo apt-get install --force-yes nginx
# UWSGI with pcre support
sudo apt-get install --force-yes libpcre3 libpcre3-dev
sudo apt-get install --force-yes python3-pip
#pip3 install --force-yes uwsgi
#!/bin/bash
##MAINTAINER ISCPIF <alexandre.delanoe@iscpif.fr>
#
#git clone ssh://gitolite@delanoe.org:1979/gargantext /srv/gargantext \
# && cd /srv/gargantext \
# && git fetch origin refactoring-alex \
# && git checkout refactoring-alex
#
#cd /srv/gargantext/install \
# && /usr/bin/virtualenv --py=/usr/bin/python3.5 /srv/env_3-5 \
# && /bin/bash -c 'source /srv/env_3-5/bin/activate' \
# && /bin/bash -c '/srv/env_3-5/bin/pip install git+https://github.com/zzzeek/sqlalchemy.git@rel_1_1' \
# && /bin/bash -c '/srv/env_3-5/bin/pip install -r /srv/gargantext/install/python/requirements.txt' \
#
## INSTALL MAIN DEPENDENCIES
cd /tmp && wget http://dl.gargantext.org/gargantext_lib.tar.bz2 \
&& tar xvjf gargantext_lib.tar.bz2 --directory /srv/gargantext_lib \
&& chown -R gargantua:gargantua /srv/gargantext_lib
## End of configuration
## be sure that postgres is running
#cd /srv/gargantext && /bin/bash -c 'source /srv/bin/env_3-5/bin/activate' \
# && /srv/gargantext/manage.py shell < /srv/gargantext/init.py
#
echo "Gargantua: END of the installation of Gargantext"
#!/bin/bash
# ## CONFIGURE POSTGRESQL
psql -c "CREATE user gargantua WITH PASSWORD 'C8kdcUrAQy66U'" && createdb -O gargantua gargandb
#!/bin/bash
#MAINTAINER ISCPIF <alexandre.delanoe@iscpif.fr>
apt-get update && \
apt-get install -y \
apt-utils ca-certificates locales \
sudo aptitude gcc g++ wget git postgresql-9.5 vim
### Configure timezone and locale
echo "Europe/Paris" > /etc/timezone && \
dpkg-reconfigure -f noninteractive tzdata && \
sed -i -e 's/# en_GB.UTF-8 UTF-8/en_GB.UTF-8 UTF-8/' /etc/locale.gen && \
sed -i -e 's/# fr_FR.UTF-8 UTF-8/fr_FR.UTF-8 UTF-8/' /etc/locale.gen && \
echo 'LANG="fr_FR.UTF-8"' > /etc/default/locale && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=fr_FR.UTF-8
## PROD VERSION OF GARGANTEXT
# apt-get install -y uwsgi nginx uwsgi-plugin-python rabbitmq-server
### CREATE USER and adding it to sudo
## USER gargantua cannot not connect with password but SSH key
adduser --disabled-password --gecos "" gargantua \
&& adduser gargantua sudo \
&& echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# addgroup gargantext here with specific users
## Install Database, main dependencies and Python
## (installing some Debian version before pip to get dependencies)
apt-get update && apt-get install -y \
postgresql-server-dev-9.5 libpq-dev libxml2 \
libxml2-dev xml-core libgfortran-5-dev \
virtualenv python3-virtualenv \
python3.4 python3.4-dev \
python3.5 python3.5-dev \
python3-six python3-numpy python3-setuptools \
python3-numexpr \
libxml2-dev libxslt-dev
# python3-six/numpy/setuptools: for numpy and pandas
# python3-numexpr: for numpy performance
# libxml2-dev libxslt-dev: for lxml
#if [[ -e "/srv/gargantext" ]]
#rm -rf /srv/gargantext /srv/env_3-5
for dir in "/srv/gargantext"\
"/srv/gargantext_lib"\
"/srv/gargantext_static"\
"/srv/gargantext_media"\
"/srv/env_3-5"\
"/var/www/gargantext"; do \
mkdir $dir
chown gargantua:gargantua $dir
done
echo "Root: END of the installation of Gargantext by Root."
# Gargantext Installation
You will find here a Dockerfile and docker-compose script
that builds a development container for Gargantext
along with a PostgreSQL 9.5.X server.
* Install Docker
On your host machine, you need Docker.
[Installation guide details](https://docs.docker.com/engine/installation/#installation)
* clone the gargantext repository and get the refactoring branch
```
git clone ssh://gitolite@delanoe.org:1979/gargantext /srv/gargantext
cd /srv/gargantext
git fetch origin refactoring
git checkout refactoring
```
Install additional dependencies into gargantext_lib
```
wget http://dl.gargantext.org/gargantext_lib.tar.bz2 \
&& sudo tar xvjf gargantext_lib.tar.bz2 --directory /srv/gargantext_lib \
&& sudo chown -R gargantua:gargantua /srv/gargantext_lib
```
* Developers: create your own branch based on refactoring
see [CHANGELOG](CHANGELOG.md) for migrations and branches name
```
git checkout -b username-refactoring refactoring
```
Build the docker images:
- a database container
- a gargantext container
```
cd /srv/gargantext/install/
docker-compose build -t gargantext /srv/gargantext/install/docker/config/
docker-compose run web bundle install
```
Finally, setup the PostgreSQL database with the following commands.
```
docker-compose run web bundle exec rake db:create
docker-compose run web bundle exec rake db:migrate
docker-compose run web bundle exec rake db:seed
```
## OS
## Debian Stretch
See install/debian
If you do not have a Debian environment, then install docker and
execute /srv/gargantext/install/docker/dev/install.sh
You need a docker image.
All the steps are explained in [docker/dev/install.sh](docker/dev/install.sh) (not automatic yet).
Bug reports are welcome.
default: &default
adapter: postgresql
encoding: unicode
pool: 5
host: postgres
port: 5432
username: gargantex
password: gargantex
development:
database: gargantex_dev
test:
database: gargantext_test
postgres:
image: "postgres:9.5"
volumes_from:
- data
expose:
- 5432
environment:
POSTGRES_PASSWORD: gargantext
POSTGRES_USER: gargantua
web:
command: python manage.py runserver 0.0.0.0:8000
build: .
volumes:
- .:/srv/
ports:
- "8000:8000"
depends_on:
- db
build: .
ports:
- "3000:3000"
links:
- postgres
volumes:
- ../:/srv/
volumes_from:
- data
environment:
HOST: 0.0.0.0
PORT: 3000
data:
image: gargantext
command: echo 'Data Container for PostgreSQL and Initial Data'
volumes:
- /var/lib/postgresql/data
- /bundler
#!/bin/bash
sudo docker export $(sudo docker ps -l | awk '{print $1}' | grep -v CONTAINER | head -n 1) > /tmp/gargantext_docker_image.tar
# To import the docker
#sudo docker import - gargantext:latest < data.tar
#sudo cat data.tar | docker import - gargantext
postgres:
image: "postgres:9.4"
volumes_from:
- data
expose:
- 5432
environment:
POSTGRES_PASSWORD: gargantext
POSTGRES_USER: gargantua
web:
build: .
ports:
- "3000:3000"
links:
- postgres
volumes:
- ../:/app/
volumes_from:
- data
environment:
HOST: 0.0.0.0
PORT: 3000
data:
image: cogniteev/echo
command: echo 'Data Container for PostgreSQL and Bundler'
volumes:
- /var/lib/postgresql/data
- /bundler
#!/bin/bash
sudo docker run -i -p 8000:8000 \
-v /srv:/srv \
-t gargantext:latest \
/bin/bash
#!/bin/bash
# Main user of Gargantext is Gargantua (role of Pantagruel soon)!
#sudo adduser --disabled-password --gecos "" gargantua
#######################################################################
# ____ _
# | _ \ ___ ___| | _____ _ __
# | | | |/ _ \ / __| |/ / _ \ '__|
# | |_| | (_) | (__| < __/ |
# |____/ \___/ \___|_|\_\___|_|
#
######################################################################
sudo docker build -t gargantext .
# OR
# cd /tmp
# wget http://dl.gargantext.org/gargantext_docker_image.tar \
# && sudo docker import - gargantext:latest < gargantext_docker_image.tar
function do_cker {
#sudo docker run -d -p 8000:8000 \
sudo docker run -d \
-v /srv2:/srv \
-v /home/alexandre:/home/alexandre \
-t gargantext:latest \
/bin/bash $1
}
#######################################################################
# _____ _ _
# | ___|__ | | __| | ___ _ __ ___
# | |_ / _ \| |/ _` |/ _ \ '__/ __|
# | _| (_) | | (_| | __/ | \__ \
# |_| \___/|_|\__,_|\___|_| |___/
#
#######################################################################
### Create directories in /srv
# Linux only
function create_folders {
for dir in "/srv/gargantext"\
"/srv/gargantext_lib"\
"/srv/gargantext_static"\
"/srv/gargantext_media"\
"/srv/gargantext_data"\
"/srv/env_3-5"; do \
sudo mkdir -p $dir ;\
sudo chown gargantua:gargantua $dir ; \
done;\
sudo chown -R postgres:postgres /srv/gargantext_data/
}
#do_cker "create_folders"
#NOPE
function git_config {
### TODO (soon) : git clone https://gogs.iscpif.fr/gargantext.git
git clone ssh://gitolite@delanoe.org:1979/gargantext /srv/gargantext \
&& cd /srv/gargantext \
&& git fetch origin refactoring \
&& git checkout refactoring
}
#su gargantua -c git_config #NOPE
#######################################################################
## ____ _
## | _ \ ___ ___| |_ __ _ _ __ ___ ___
## | |_) / _ \/ __| __/ _` | '__/ _ \/ __|
## | __/ (_) \__ \ || (_| | | | __/\__ \
## |_| \___/|___/\__\__, |_| \___||___/
## |___/
#######################################################################
#NOPE
function postgres_config {
/usr/lib/postgresql/9.5/bin/initdb -D /srv/gargantext_data/
}
#do_cker "su postgres -c postgres_config"
function postgres_create_db {
sudo /etc/init.d/postgresql start \
&& psql -c "CREATE user gargantua WITH PASSWORD 'C8kdcUrAQy66U'" \
&& createdb -O gargantua gargandb \
&& echo "Root: END of the installation of Gargantexts Database by postgres."
}
#do_cker postgres_create_db
#######################################################################
## _ _ _ _ _
## | | (_) |__ _ __ __ _(_)_ __(_) ___ ___
## | | | | '_ \| '__/ _` | | '__| |/ _ \/ __|
## | |___| | |_) | | | (_| | | | | | __/\__ \
## |_____|_|_.__/|_| \__,_|_|_| |_|\___||___/
##
#######################################################################
#
#######################################################################
### INSTALL MAIN DEPENDENCIES
#######################################################################
###
#### Installing pip version of python libs
#
function install_python_env {
/usr/bin/virtualenv --py=/usr/bin/python3.5 /srv/env_3-52 \
&& /bin/bash -c 'source /srv/env_3-52/bin/activate' \
&& /bin/bash -c '/srv/env_3-52/bin/pip install git+https://github.com/zzzeek/sqlalchemy.git@rel_1_1' \
&& /bin/bash -c '/srv/env_3-52/bin/pip install -r /srv/gargantext/install/python/requirements.txt'
}
#do_cker "su gargantua -c install_python_env"
#######################################################################
function init_gargantext {
echo "TODO script pour peupler la base"
}
#do_cker "su gargantua -c init_gargantext"
#######################################################################
### GET CONFIG FILES
function get_libs {
wget http://dl.gargantext.org/gargantext_lib.tar.bz2 \
&& tar xvjf gargantext_lib.tar.bz2 --directory /srv/gargantext_lib \
&& sudo chown -R gargantua:gargantua /srv/gargantext_lib \
&& echo "Libs installed"
}
#do_cker get_libs
###########################################################
# ____ ____ _____ __ ___ #
# / ___| __ _ _ __ / ___| __ _ _ _|_ _|__\ \/ / |_ #
# | | _ / _` | '__| | _ / _` | '_ \| |/ _ \\ /| __| #
# | |_| | (_| | | | |_| | (_| | | | | | __// \| |_ #
# \____|\__,_|_| \____|\__,_|_| |_|_|\___/_/\_\\__| #
# #
# Gargamelle WEB
###########################################################
######################################################################
#Build an image starting with debian:stretch image
# wich contains all the source code of the app
FROM debian:stretch
MAINTAINER ISCPIF <alexandre.delanoe@iscpif.fr>
######################################################################
#Add the current image into /srv/
ADD . /srv/
#Set the working directory to /srv
WORKDIR /srv/
#Install the debian dependencies
#as root
MAINTAINER ISCPIF <gargantext@iscpif.fr>
# Configure global ENV with deb dependencies
# Configure local ENV requirements
########################################################################
ENV DEBIAN_FRONTEND noninteractive
USER root
#declare 2 environment variables
ENV GG_ROOT /srv/gargantext
ENV PYTHON_ENV /srv/env_3-5
### Update and install base dependencies
RUN apt-get update && \
apt-get install -y \
apt-utils ca-certificates locales \
sudo aptitude gcc g++ wget git postgresql-9.5 vim \
build-essential make
sudo aptitude gcc g++ wget git vim \
build-essential make \
postgresql-9.5 postgresql-client-9.5 postgresql-contrib-9.5 \
postgresql-server-dev-9.5 libpq-dev libxml2 \
postgresql-9.5 postgresql-client-9.5 postgresql-contrib-9.5
RUN echo "############ DEBIAN LIBS ###############"
### Configure timezone and locale
RUN echo "Europe/Paris" > /etc/timezone && \
dpkg-reconfigure -f noninteractive tzdata && \
......@@ -38,43 +34,54 @@ RUN echo "Europe/Paris" > /etc/timezone && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=fr_FR.UTF-8
### Install Database, main dependencies and Python
### (installing some Debian version before pip to get dependencies)
RUN echo "########### LOCALES & TZ #################"
### Install main dependencies and python packages based on Debian distrib
RUN apt-get update && apt-get install -y \
postgresql-server-dev-9.5 libpq-dev libxml2 \
libxml2-dev xml-core libgfortran-5-dev \
virtualenv python3-virtualenv \
python3.5 python3-dev \
libpq-dev \
python3.5 \
python3-dev \
python3-six python3-numpy python3-setuptools \
# ^for numpy, pandas
# ^for numpy, pandas and numpyperf
python3-numexpr \
# ^ for numpy performance
#python dependencies
python3-pip \
# for lxml
libxml2-dev libxslt-dev
# ^ for lxml
#libxslt1-dev zlib1g-dev
RUN echo "############# PYTHON DEPENDENCIES ###############"
#UPDATE AND CLEAN
RUN apt-get update && apt-get autoclean &&\
rm -rf /var/lib/apt/lists/*
#NB: removing /var/lib/apt/lists avoids significantly filling up the /var/ folder on your native system
########################################################################
### PYTHON ENVIRONNEMENT (as ROOT)
########################################################################
RUN apt-get install -qy python3.5
RUN apt-get install -qy python3-pip
RUN pip3 install -r /srv/gargantext/install/python/requirements.txt
RUN adduser --disabled-password --gecos "" gargantua
### PROD VERSION OF GARGANTEXT ONLY
#RUN apt-get install -y uwsgi nginx uwsgi-plugin-python rabbitmq-server
RUN pip3 install virtualenv
RUN virtualenv /env_3-5
RUN echo 'alias venv="source /env_3-5/bin/activate"' >> ~/.bashrc
# CONFIG FILES
ADD requirements.txt /
ADD psql_configure.sh /
ADD django_configure.sh /
## CREATE USER and adding it to sudo >> docker-compose build
## TODO ask user for password
#RUN adduser --disabled-password --gecos "" gargantua
RUN . /env_3-5/bin/activate; pip3 install -r requirements.txt; \
pip3 install git+https://github.com/zzzeek/sqlalchemy.git@rel_1_1; \
python3 -m nltk.downloader averaged_perceptron_tagger;
#RUN apt-get install -y sudo && adduser gargantua sudo \
# && echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
RUN chown gargantua:gargantua -R /env_3-5
########################################################################
### POSTGRESQL DATA (as ROOT)
########################################################################
#######################################################################
### CONFIGURE POSTGRESQL
#######################################################################
#in docker database
#RUN sed -iP 's%^data_directory.*%data_directory = '\/srv\/gargantext_data'%' /etc/postgresql/9.5/main/postgresql.conf
######################################################################
RUN sed -iP "s%^data_directory.*%data_directory = \'\/srv\/gargandata\'%" /etc/postgresql/9.5/main/postgresql.conf
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.5/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.5/main/postgresql.conf
#INSTALLATION AND CONFIG python and postgresql module
COPY . /install
CMD ./install/python/configure && ./install/postgres/configure
EXPOSE 5432 8000
VOLUME ["/srv/",]
#!/bin/bash
#configure the base image gargamelle
docker build -t gargamelle:latest ./gargamelle
#2 options with this image:
# configure the container
# run the image with the app in it
#!/bin/bash
PATH="/srv/gargantext/install/gargamelle"
#./folders_configure.sh;
#./psql_configure.sh;
/bin/bash "$PATH/django_configure.sh";
/bin/bash -c "Configuration Ok""
#!/bin/bash
##################################################
# __| |(_) __ _ _ __ __ _ ___
# / _` || |/ _` | '_ \ / _` |/ _ \
# | (_| || | (_| | | | | (_| | (_) |
# \__,_|/ |\__,_|_| |_|\__, |\___/
# |__/ |___/
##################################################
#configure django migrations
##################################################
echo "Starting Postgres"
/usr/sbin/service postgresql start
/bin/su gargantua -c 'source /env_3-5/bin/activate \
&& /srv/gargantext/manage.py makemigrations \
&& /srv/gargantext/manage.py migrate \
&& /srv/gargantext/dbmigrate.py \
&& /srv/gargantext/dbmigrate.py \
&& /srv/gargantext/dbmigrate.py;'
/usr/sbin/service postgresql stop
#!/bin/bash
chown -R gargantua:gargantua /srv/gargantext
#!/bin/bash
#######################################################################
## ____ _
## | _ \ ___ ___| |_ __ _ _ __ ___ ___
## ____ _
## | _ \ ___ ___| |_ __ _ _ __ ___ ___
## | |_) / _ \/ __| __/ _` | '__/ _ \/ __|
## | __/ (_) \__ \ || (_| | | | __/\__ \
## |_| \___/|___/\__\__, |_| \___||___/
## |___/
## |___/
#######################################################################
su postgres -c 'pg_dropcluster 9.4 main --stop'
#rm -rf /srv/gargantext_data && mkdir /srv/gargantext_data && chown postgres:postgres /srv/gargantext_data
su postgres -c '/usr/lib/postgresql/9.5/bin/initdb -D /srv/gargantext_data/'
su postgres -c '/usr/lib/postgresql/9.5/bin/pg_ctl -D /srv/gargantext_data/ -l journal_applicatif start'
service postgresql stop
#su postgres -c 'pg_createcluster -D /srv/gargantext_data 9.5 main '
#su postgres -c 'pg_ctlcluster -D /srv/gargantext_data 9.5 main start '
#su postgres -c 'pg_ctlcluster 9.5 main start'
su postgres -c 'pg_dropcluster 9.5 main --stop'
#done in docker but redoing it
if [[ -e "/srv/gargandata" ]]; then
rm -rf /srv/gargandata/*
else
mkdir /srv/gargandata;
chown -R postgres:postgres /srv/gargandata
fi
su postgres -c '/usr/lib/postgresql/9.5/bin/initdb -D /srv/gargandata/'
su postgres -c '/usr/lib/postgresql/9.5/bin/pg_ctl -D /srv/gargandata/ -l journal_applicatif start'
#su postgres -c 'pg_createcluster -D /srv/gargandata 9.5 main '
#su postgres -c 'pg_ctlcluster -D /srv/gargandata 9.5 main start '
service postgresql start
......@@ -24,5 +34,4 @@ su postgres -c "psql -c \"CREATE user gargantua WITH PASSWORD 'C8kdcUrAQy66U'\""
su postgres -c "createdb -O gargantua gargandb"
echo "Postgres configured"
service postgresql stop
......@@ -29,5 +29,8 @@ networkx==1.11
pandas==0.18.0
six==1.10.0
lxml==3.5.0
requests-futures==0.9.7
bs4==0.0.1
requests==2.10.0
#testing github
#-e git://github.com/zzzeek/sqlalchemy.git@rel_1_1
gargantua,contact@gargantext,autnagrag,
#!/usr/bin/bash
echo "Adding user gargantua";
sudo adduser --disabled-password --gecos "" gargantua;
echo "Creating the environnement into /srv/";
for dir in "/srv/gargantext" "/srv/gargantext_lib" "/srv/gargantext_static" "/srv/gargantext_media""/srv/env_3-5"; do
sudo mkdir -p $dir ;
sudo chown gargantua:gargantua $dir ;
done;
echo "Downloading the libs";
wget http://dl.gargantext.org/gargantext_lib.tar.bz2 \
&& tar xvjf gargantext_lib.tar.bz2 --directory /srv/gargantext_lib \
&& sudo chown -R gargantua:gargantua /srv/gargantext_lib \
&& echo "Libs installed";
#cp ~/.ssh/id_rsa.pub id_rsa.pub
echo "Cloning the repo";
git clone ssh://gitolite@delanoe.org:1979/gargantext /srv/gargantext \
&& cd /srv/gargantext \
&& git fetch origin refactoring \
&& git checkout refactoring \
&& echo "Currently on /srv/gargantext refactoring branch";
-- ____
-- / ___|
-- | | _
-- | |_| |
-- \____|arganTexT
----------------------------------------------------------------------
-- Gargantext optimization of Database --
----------------------------------------------------------------------
--> Manual optimization with indexes according to usages
-- Weakness and Strengths of indexes:
--> it can slow down the insertion(s)
--> it can speed up the selection(s)
--> Conventions for this document:
--> indexes commented already have been created
--> indexes not commented have not been created yet
----------------------------------------------------------------------
-- Retrieve Nodes
----------------------------------------------------------------------
-- create INDEX on nodes (user_id, typename, parent_id) ;
-- create INDEX on nodes_hyperdata (node_id, key);
-- create INDEX on ngrams (id, n) ;
-- create INDEX on ngrams (n) ;
-- create INDEX on nodes_ngrams (node_id, ngram_id) ;
-- create INDEX on nodes_ngrams (node_id) ;
-- create INDEX on nodes_ngrams (ngram_id) ;
-- create INDEX on nodes_ngrams_ngrams (node_id, ngram1_id, ngram2_id) ;
-- create INDEX on nodes_ngrams_ngrams (node_id) ;
-- create INDEX on nodes_ngrams_ngrams (ngram1_id) ;
-- create INDEX on nodes_ngrams_ngrams (ngram2_id) ;
----------------------------------------------------------------------
-- DELETE optimization of Nodes -- todo on dev
-- create INDEX on nodes_nodes_ngrams (node1_id);
-- create INDEX on nodes_nodes_ngrams (node2_id);
-- create INDEX on nodes_nodes (node1_id, node2_id);
-- Maybe needed soon:
-- create INDEX on nodes_nodes_ngrams (node1_id, node2_id);
----------------------------------------------------------------------
-- Analytics
-- create INDEX on nodes_hyperdata (node_id,value_utc); -- remove ?
-- create INDEX on nodes_hyperdata (node_id,key,value_utc);
-- create INDEX on nodes_hyperdata (node_id,key,value_int);
-- create INDEX on nodes_hyperdata (node_id,key,value_flt);
-- create INDEX on nodes_hyperdata (node_id,key,value_str);
----------------------------------------------------------------------
----------------------------------------------------------------------
create index on nodes using GIN (hyperdata);
----------------------------------------------------------------------
#!/bin/bash
ENV="/srv/env_3-5"
/usr/bin/virtualenv --python=/usr/bin/python3.5 $ENV \
&& /bin/bash -c "source ${ENV}/bin/activate" \
&& /bin/bash -c "${ENV}/bin/pip install git+https://github.com/zzzeek/sqlalchemy.git@rel_1_1" \
&& /bin/bash -c "${ENV}/bin/pip install -r /srv/gargantext/install/python/requirements.txt"
#!/usr/bin/bash
#enter the Image
/srv/gargantext/install/docker/enterGargantextImage
#start postgresql
service postgresql start
#change to user
su gargantua
#activate the virtualenv
source /srv/env_3-5/bin/activate
#go to the gargantext source dir and start the server
cd /srv/gargantext
python manage.py runserver 0.0.0.0:8000
#!/usr/bin/bash
/srv/gargantext/install/docker/enterGargantextImage
/srv/gargantext/install/python/configure
/srv/gargantext/install/postgres/configure
service postgresql start
source /srv/env_3-5/bin/activate
python /srv/gargantext/dbmigrate.py
/srv/gargantext/manage.py makemigrations
/srv/gargantext/manage.py migrate
python /srv/gargantext/dbmigrate.py
python /srv/gargantext/dbmigrate.py
python /srv/gargantext/init_accounts.py /srv/gargantext/install/init/account.csv
/srv/gargantext_lib/js/libs
\ No newline at end of file
@@ -22,13 +22,6 @@ var getCookie = function(name) {
}
return cookieValue;
}
var csrftoken = getCookie('csrftoken');
$.ajaxSetup({
beforeSend: function(xhr, settings) {
xhr.setRequestHeader("X-CSRFToken", csrftoken);
}
});
// Resource class
var Resource = function(url_path) {
@@ -65,6 +58,9 @@ var Resource = function(url_path) {
$.ajax({
url: url,
type: 'GET',
beforeSend: function(xhr) {
xhr.setRequestHeader("X-CSRFToken", getCookie("csrftoken"));
},
success: callback
});
};
@@ -73,30 +69,32 @@ var Resource = function(url_path) {
$.ajax({
url: url_path + '/' + id,
type: 'PATCH',
beforeSend: function(xhr) {
xhr.setRequestHeader("X-CSRFToken", getCookie("csrftoken"));
},
success: callback
});
};
// remove an item
this.delete = this.remove = function(id, callback) {
if (id.id != undefined) {
id = id.id;
}
$.ajax({
url: url_path + '/' + id,
type: 'DELETE',
beforeSend: function(xhr) {
xhr.setRequestHeader("X-CSRFToken", getCookie("csrftoken"));
},
success: callback
});
};
// add an item
this.add = this.append = function(value, callback) {
this.add = this.append = function(id, callback) {
$.ajax({
// todo define id
url: url_path + '/' + id,
type: 'POST',
beforeSend: function(xhr) {
xhr.setRequestHeader("X-CSRFToken", getCookie("csrftoken"));
},
success: callback
});
};
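// Usage sketch (endpoint path and id are hypothetical; only the verbs
// defined above are shown). Each request now attaches the CSRF token
// in its own beforeSend hook instead of a global $.ajaxSetup:
//
//     var nodes = new Resource('/api/nodes');
//     nodes.add(42, function(data) { console.log('created', data); });
//     nodes.remove(42, function() { console.log('deleted'); });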
Notes on the integration of tina
===================================
### For reference: the procedure followed
The 2 commands used are copied here to make visible how the tina git repository was merged into the garg one.
Thanks to this method, whoever clones the gargantext repository will also get the contents of the tina repository in our subfolder **`static/lib/graphExplorer`**.
**NB**
There is no need to repeat this procedure; from now on the files simply stay there in the subfolder.
1. the external graphExplorer repository was added as if it were a normal remote
```
git remote add dependancy_graphExplorer_garg https://gogs.iscpif.fr/humanities/graphExplorer_garg
```
2. the `subtree` command was run with this remote, to fetch the tina repository and place it inside garg in the folder given by the `prefix` option
```
git subtree add --prefix=static/lib/graphExplorer dependancy_graphExplorer_garg master
```
Result:
```
# git fetch dependancy_graphExplorer_garg master
# (...)
# Receiving objects: 100% (544/544), 1.72 MiB | 0 bytes/s, done.
# Resolving deltas: 100% (307/307), done.
# From https://gogs.iscpif.fr/humanities/graphExplorer_garg
# * branch master -> FETCH_HEAD
# * [new branch] master -> dependancy_graphExplorer_garg/master
# Added dir 'static/lib/graphExplorer'
```
3. along the way, the same command also created the following commit in my gargantext branch
```
# commit b8d7f061f8c236bad390eb968d153fd6729b7434
# Merge: 3bfb707 d256049
# Author: rloth <romain.loth@iscpif.fr>
# Date: Thu Jul 7 16:01:46 2016 +0200
#
# Add 'static/lib/graphExplorer/' from commit 'd256049'
```
(here commit *d256049* marks the state the tina repository was in when it was copied)
### Day-to-day development
Nothing special is required anymore. The folder contains the tina components we need. The subtree can be ignored and work can proceed as usual, both in this folder and elsewhere.
**=> our daily commit / pull operations are not affected**
Nor is there any need to take the presence or absence of the "remote" (external link) into account while working.
### Advanced use: propagating changes between repositories
The tina repository can now be seen as a kind of upstream repository restricted to the single subfolder **`static/lib/graphExplorer`**!
However, if changes land in the tina repository, they will not be integrated automatically into its copy inside garg. To move changes back and forth between the repositories, the simplest approach is to add the same external pointer once:
```
git remote add dependancy_graphExplorer_garg https://gogs.iscpif.fr/humanities/graphExplorer_garg
```
From there, push/pull operations between the repositories become very simple whenever needed.
1. Pulling updates tina => garg.
To integrate upstream changes from tina into garg, just run:
```
git subtree pull --prefix=static/lib/graphExplorer dependancy_graphExplorer_garg master --squash
```
2. Conversely, changes made in the **`static/lib/graphExplorer`** folder by garg developers can also be pushed from the garg repository to the tina repository with a subtree push
```
git subtree push --prefix=static/lib/graphExplorer dependancy_graphExplorer_garg master
```
<?php
header('Content-Type: application/json');
include("DirectoryScanner.php");
$projectFolderPat = dirname(dirname(getcwd())) . "/";
$instance = new scanTree($projectFolderPat);
$instance->getDirectoryTree("data");
//pr(var_dump($instance->folders));
$output = array();
$output["folders"] = $instance->folders;
$output["gexf_idfolder"] = $instance->gexf_folder;
echo json_encode($output);
// ** Debug Functions: **
function br() {
echo "----------<br>";
}
function pr($msg) {
echo $msg . "<br>";
}
?>
<?php
class scanTree {
public $root;
public $folders = array();
public $gexf_folder = array();
public function __construct($rootpath = "") {
$this->root = $rootpath;
}
public function getDirectoryTree($dir) {
$folder = array();
$dbs = array();
$gexfs = array();
$dataFolder = $this->root . $dir;
$files = scandir($dataFolder);
foreach ($files as $f) {
if ($f != "." and $f != ".." and $f[strlen($f) - 1] != "~") {
if (is_dir($dataFolder . "/" . $f)) {
//pr("Dir: ".$f);
$subfolder = $f;
$this->getDirectoryTree($dir . "/" . $subfolder);
} else {
//pr("File: ".$f);
if ((strpos($f, '.gexf')))
array_push($gexfs, $f);
if ((strpos($f, '.db')) or (strpos($f, '.sqlite')) or (strpos($f, '.sqlite3')))
array_push($dbs, $f);
if (!$folder[$dir]["gexf"] or !$folder[$dir]["dbs"])
$folder[$dir] = array();
$folder[$dir]["gexf"] = $gexfs;
$folder[$dir]["dbs"] = $dbs;
if ((strpos($f, '.gexf'))) {
$this->gexf_folder[$dir . "/" . $f] = "";
}
}
}
}
if ($folder[$dir]["gexf"]) {
foreach ($folder[$dir]["gexf"] as $g) {
$this->gexf_folder[$dir . "/" . $g] = count($this->folders);
}
}
array_push($this->folders, $folder);
}
}
?>
<?php
echo 'toto';
?>
\ No newline at end of file
<?php
// default information
$thedb = $graphdb;
$gexf=$_GET["gexf"];
$max_item_displayed=6;
$type = $_GET["type"];
$TITLE="ISITITLE";
$query = str_replace( '__and__', '&', $_GET["query"] );
$elems = json_decode($query);
$table = "";
$column = "";
$id="";
$twjs="API_CNRS/"; // submod path of TinaWebJS
if($type=="social"){
$table = "ISIAUTHOR";
$column = "data";
$id = "id";
$restriction='';
$factor=10;// factor for normalisation of stars
}
if($type=="semantic"){
$table = $_GET["index"];//"ISItermsfirstindexing";
$column = "data";
$id = "id";
$restriction='';
$factor=10;
}
$restriction='';
$factor=10;
$sql="";
//////////
if (count($elems)==1){// a single word is selected: count multiple mentions of it
$sql = 'SELECT count(*),'.$id.'
FROM '.$table.' where (';
foreach($elems as $elem){
$sql.=' '.$column.'="'.$elem.'" OR ';
}
#$querynotparsed=$sql;#####
$sql = substr($sql, 0, -3);
$sql = str_replace( ' & ', '" OR '.$column.'="', $sql );
$sql.=')'.$restriction.'
GROUP BY '.$id.'
ORDER BY count('.$id.') DESC
LIMIT 1000';
}else{// count each word only once per article
$factor=ceil(count($elems)/5); // scores are lower
$sql='';
foreach($elems as $elem){
$sql.=' '.$column.'="'.$elem.'" OR ';
}
$sql=substr($sql, 0, -3);
$sql='SELECT count(*),id,data FROM (SELECT *
FROM '.$table.' where ('.$sql.')'.$restriction.'
group by id,data) GROUP BY '.$id.'
ORDER BY count('.$id.') DESC
LIMIT 1000';
}
$wos_ids = array();
$sum=0;
// echo "<br>";
// echo "$sql";
//The final query!
// array of all relevant documents with score
foreach ($base->query($sql) as $row) {
// weight the score by the number of terms the article mentions
//$num_rows = $result->numRows();
$wos_ids[$row[$id]] = $row["count(*)"];
$sum = $row["count(*)"] +$sum;
}
// number of associated documents: $related
$total_count=0;
$count_max=500;
$number_doc=count($wos_ids);
$count=0;
$all_terms_from_selected_projects=array();// list of terms for the top 6 project selected
// to filter under some conditions
$to_display=true;
$count=0;
foreach ($wos_ids as $id => $score) {
if ($total_count<$count_max) {
// retrieve publication year
if ($to_display){
$total_count+=1;
if ($count<=$max_item_displayed){
$count+=1;
$sql = 'SELECT data FROM ISITITLE WHERE id='.$id.' group by data';
foreach ($base->query($sql) as $row) {
$external_link="<a href=http://google.com/webhp?#q=".urlencode('"'.$row['data'].'"')." target=blank>".' <img width=15px src="'.$twjs.'img/google.png"></a>';
$output.="<li title='".$score."'>";
$output.=$external_link.imagestar($score,$factor,$twjs).' ';
$output.='<a href="JavaScript:newPopup(\''.$twjs.'default_doc_details.php?gexf='.urlencode($gexf).'&index='.$table.'&query='.urlencode($query).'&type='.urlencode($_GET["type"]).'&id='.$id.' \')">'.$row['data']." </a> ";
// echo '<a href="JavaScript:newPopup(\''.$twjs.'default_doc_details.php?gexf='.urlencode($gexf).'&index='.$table.'&query='.urlencode($query).'&type='.urlencode($_GET["type"]).'&id='.$id.' \')">'.$row['data']." </a> ";
}
// get the authors
$sql = 'SELECT data FROM ISIAUTHOR WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$output.=($row['data']).', ';
}
$output = rtrim($output, ", ");
$output.="</li><br>";
}
}
} else{
continue;
}
}
if ($total_count<$count_max){
$related .= $total_count;
}else{
$related .= ">".$count_max;
}
$output .= "</ul>"; #####
// // if (($project_folder=='nci')&&(count($elems)<$max_selection_size)){
// // // for NCI we compare the impact and novelty score making the difference if there are more than 4 terms selected
// // $news='';//new terms
// // $terms_from_selected_projects=array_unique($all_terms_from_selected_projects);
// // if(count($terms_from_selected_projects)>3){
// // $diff=array();
// // foreach ($terms_from_selected_projects as $key => $term) {
// // $sql= "select count(*),ISIterms.id, ISIterms.data from ISIterms join ISIpubdate on (ISIterms.id=ISIpubdate.id AND ISIpubdate.data=2011 AND ISIterms.data='".$term."') group by ISIterms.data";
// // $nov=0;
// // foreach ($corporadb->query($sql) as $row) {
// // $nov=$row['count(*)'];
// // }
// // $sql= "select count(*),ISIterms.id, ISIterms.data from ISIterms join ISIpubdate on (ISIterms.id=ISIpubdate.id AND ISIpubdate.data=2012 AND ISIterms.data='".$term."') group by ISIterms.data";
// // $imp=0;
// // foreach ($corporadb->query($sql) as $row) {
// // $imp=$row['count(*)'];
// // }
// // $diff[$term]=info($nov,$imp); // positive if it is a novelty term, negative if it is an impact term.
// // //echo $term.'-nov: '.$nov.'- imp:'.$imp.'<br/>';//'-info'.$diff[$term].
// // }
// // if (true){
// // arsort($diff);
// // $res=array_keys($diff);
// // //echo implode(', ', $res);
// // $nov_string='';
// // for ($i=0;$i<$top_displayed;$i++){
// // // retrieve the titles of the document with the strongest impact
// // $sql="SELECT ISIterms.id,ISIC1_1.data,count(*) from ISIterms,ISIpubdate,ISIC1_1 where ISIterms.data='".$res[$i]."' AND ISIterms.id=ISIpubdate.id AND ISIterms.id=ISIC1_1.id AND ISIpubdate.data='2011' group by ISIterms.id ORDER BY RANDOM() limit 1";
// // // retrieve the associated ids.
// // foreach ($corporadb->query($sql) as $row){
// // $sql2='SELECT ISIpubdate.id,ISIC1_1.data from ISIpubdate,ISIC1_1 where ISIC1_1.data="'.$row['data'].'" AND ISIpubdate.id=ISIC1_1.id AND ISIpubdate.data="2013" limit 1';
// // //echo $sql2;
// // foreach ($corporadb->query($sql2) as $row2){
// // $nov_string.='<a href="JavaScript:newPopup(\''.$twjs.'php/default_doc_details.php?db='.urlencode($graphdb).'&gexf='.urlencode($gexf).'&query='.urlencode('["'.$res[$i].'"]').'&type='.urlencode($_GET["type"]).'&id='.$row2['id'].' \')">'.$res[$i]."</a>, ";
// // }
// // }
// // }
// // $news.='<br/><b><font color="#FF0066">Top '.$top_displayed.' Novelty related terms </font></b>'.$nov_string.'<br/>';
// // asort($diff);
// // $res=array_keys($diff);
// // $res_string='';
// // for ($i=0;$i<$top_displayed;$i++){
// // // retrieve the titles of the document with the strongest impact
// // $sql="SELECT ISIterms.id,ISIC1_1.data,count(*) from ISIterms,ISIpubdate,ISIC1_1 where ISIterms.data='".$res[$i]."' AND ISIterms.id=ISIpubdate.id AND ISIterms.id=ISIC1_1.id AND ISIpubdate.data='2012' group by ISIterms.id ORDER BY RANDOM()limit 1";
// // // retrieve the associated ids.
// // foreach ($corporadb->query($sql) as $row){
// // $sql2='SELECT ISIpubdate.id,ISIC1_1.data from ISIpubdate,ISIC1_1 where ISIC1_1.data="'.$row['data'].'" AND ISIpubdate.id=ISIC1_1.id AND ISIpubdate.data="2013" limit 1';
// // //echo $sql2;
// // foreach ($corporadb->query($sql2) as $row2){
// // $res_string.='<a href="JavaScript:newPopup(\''.$twjs.'php/default_doc_details.php?db='.urlencode($graphdb).'&gexf='.urlencode($gexf).'&query='.urlencode('["'.$res[$i].'"]').'&type='.urlencode($_GET["type"]).'&id='.$row2['id'].' \')">'.$res[$i]."</a>, ";
// // }
// // }
// // }
// // $news.='<br/><b><font color="#CF5300">Top '.$top_displayed.' Impact related terms: </font></b>'.$res_string.'<br/>';
// // }
// // }
// // }
// // display the most occurring terms when only one is selected.
// //elseif (count($elems)==1) {// display the neighbours
// // $terms_array=array();
// // $id_sql='SELECT ISIterms.id FROM ISIterms where ISIterms.data="'.$elems[0].'" group by id';
// // foreach ($base->query($id_sql) as $row_id) {
// // $sql2='SELECT ISIterms.data FROM ISIterms where ISIterms.id='.$row_id['id'];
// // foreach ($base->query($sql2) as $row_terms) {
// // if ($terms_array[$row_terms['data']]>0){
// // $terms_array[$row_terms['data']]=$terms_array[$row_terms['data']]+1;
// // }else{
// // $terms_array[$row_terms['data']]=1;
// // }
// // }
// // }
// // natsort($terms_array);
// // $terms_list=array_keys(array_slice($terms_array,-11,-1));
// // foreach ($terms_list as $first_term) {
// // $related_terms.=$first_term.', ';
// // }
// // $news.='<br/><b><font color="#CF5300">Related terms: </font></b>'.$related_terms.'<br/>';
// //}
// calculate binomial coefficient
function binomial_coeff($n, $k) {
$j = $res = 1;
if($k < 0 || $k > $n)
return 0;
if(($n - $k) < $k)
$k = $n - $k;
while($j <= $k) {
$res *= $n--;
$res /= $j++;
}
return $res;
}
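// Worked example for the function above (quick sanity check):
// binomial_coeff(5, 2) == 10 and binomial_coeff(4, 0) == 1,
// matching C(n,k) = n! / (k! * (n-k)!).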
function imagestar($score,$factor,$twjs) {
// builds the html for the score star images
$star_image = '';
if ($score > .5) {
$star_image = '';
for ($s = 0; $s < min(5,$score/$factor); $s++) {
$star_image.='<img src="'.$twjs.'img/star.gif" border="0" >';
}
} else {
$star_image.='<img src="'.$twjs.'img/stargrey.gif" border="0">';
}
return $star_image;
}
if($max_item_displayed>$related) $max_item_displayed=$related;
echo $news.'<br/><h4><font color="#0000FF"> Full text of top '.$max_item_displayed.'/'.$related.' related publications:</font></h4>'.$output;
//pt - 301 ; 15.30
?>
<?php
include('parameters_details.php');
$db = $gexf_db[$gexf];
$base = new PDO("sqlite:../" .$db);
$query = str_replace( '__and__', '&', $_GET["query"] );
$terms_of_query = json_decode($query);
// echo "mainpath: ".$mainpath."<br>";
// echo "thedb: ".$db."<br>";
// echo "thequery: ".var_dump($terms_of_query);
echo '
<html>
<head>
<meta charset="utf-8" />
<title>Document details</title>
<link rel="stylesheet" href="js/jquery-ui.css" />
<script src="js/jquery-1.9.1.js"></script>
<script src="js/jquery-ui.js"></script>
<script>
$(function() {
$( "#tabs" ).tabs({
collapsible: true
});
});
</script>
</head>
<body>
<div id="tabs">
<ul>
<li><a href="#tabs-1">Selected Document</a></li>
<li><a href="full_doc_list.php?'.'gexf='.urlencode($gexf).'&query='.urlencode($_GET["query"]).'&type='.urlencode($_GET["type"]).'">Full list</a></li>';
echo '</ul>';
echo '<div id="tabs-1">';
$id=$_GET["id"];
// //$elems = json_decode($query);
// $sql = 'SELECT data FROM ISIkeyword WHERE id='.$id;
// foreach ($base->query($sql) as $row) {
// $country=$CC[strtoupper($row['data'])];
// }
$sql = 'SELECT data FROM ISITITLE WHERE id='.$id.' group by data';
foreach ($base->query($sql) as $row) {
$output.='<h2>'.$row['data'].'</h2>';
$find.="<br/><a href=http://google.com/webhp?q=".urlencode('"'.$row['data'].'"')." target='blank'>[ Search on the web ] </a>";
}
// get the authors
$sql = 'SELECT data FROM ISIAUTHOR WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$output.='<i>'.($row['data']).'</i>, ';
}
$output = rtrim($output, ", ");
// // // get the company
// // $sql = 'SELECT data FROM ISIC1_1 WHERE id='.$id;
// // foreach ($base->query($sql) as $row) {
// //$output.=' - '.substr($row['data'],3,strlen( $row['data'])).' ';
// //}
// get the date
$sql = 'SELECT data FROM ISIpubdate WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$output.=' ('.$row['data'].') ';
}
// // get the country
// $sql = 'SELECT data FROM ISIkeyword WHERE id='.$id;
// foreach ($base->query($sql) as $row) {
// $country=$CC[strtoupper($row['data'])];
// $output.=strtoupper($country).'<br/> ';
// }
// // get the date
if(strpos($_GET["index"],'terms') ) $sql = 'SELECT data FROM '.$_GET["index"].' WHERE id='.$id;
else $sql = 'SELECT data FROM ISItermsListV1 WHERE id='.$id;
$output.='<br/><b>Keywords: </b>';
$terms=array();
foreach ($base->query($sql) as $row) {
$terms[]=$row['data'];
}
natsort($terms);
$terms=array_unique($terms); // list of the article's terms
$keywords='';
foreach ($terms as $key => $value) {
$keywords.=$value.', ';
}
foreach ($terms_of_query as $key => $value) {
$keywords=str_replace($value,'<font color="green"><b> '.$value.'</b></font>',$keywords);
}
foreach (array_diff($terms,$terms_of_query) as $key => $value) {
$keywords=str_ireplace($value,'<font color="#800000"> '.$value.'</font>',$keywords);
}
$output.='<p align="justify">'.$keywords.'</p>';
// // get the website
$sql = 'SELECT data FROM ISISO WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$output.='<b>Journal: </b>'.$row['data'].'<br/> ';
}
$sql = 'SELECT data FROM ISIABSTRACT WHERE id='.$id;
// echo $output."<br>";
$abs="";
foreach ($base->query($sql) as $row) {
$abs.=". ".$row['data'];
}
$abs=str_replace('ISSUES:' ,'<br/><br/><b>Issues:</b>',$abs);
$abs=str_replace('INTENDED IMPACT:' ,'<br/><br/><b>Intended impact:</b>',$abs);
$abs=str_replace('IMPACT:' ,'<br/><br/><b>Impact:</b>',$abs);
$abs=str_replace('NOVELTY:' ,'<br/><br/><b>Novelty:</b>',$abs);
$abs=str_replace('BOLD INNOVATION:' ,'<br/><br/><b>Bold innovation:</b>',$abs);
$abs=str_replace('SOCIAL PROBLEM:' ,'<br/><br/><b>Social problem:</b>',$abs);
// fixing encoding problems
$abs=str_replace('―', ' ', $abs);
$abs=str_replace('‟‟', ' ', $abs);
$abs=str_replace('„‟', ' ', $abs);
$abs=str_replace('_x000D_', ' ', $abs);
$abs=str_replace('•', ' ', $abs);
$abs=str_replace('’', '\'', $abs);
foreach ($terms_of_query as $key => $value) {
$abs=str_ireplace($value,'<font color="green"><b> '.$value.'</b></font>',$abs);
}
foreach (array_diff($terms,$terms_of_query) as $key => $value) {
$abs=str_ireplace($value,'<font color="#800000"> '.$value.'</font>',$abs);
}
$output.='<br/><p align="justify"><b>Abstract : </b><i>'.$abs.' </i></p>';
$output.="<br>";
echo $output.$find;
echo '</div>';
//echo '<div id="tabs-2">
// <p><strong>Click this tab again to close the content pane.</strong></p>
// <p>Morbi tincidunt, dui sit amet facilisis feugiat, odio metus gravida ante, ut pharetra massa metus id nunc. Duis scelerisque molestie turpis. Sed fringilla, massa eget luctus malesuada, metus eros molestie lectus, ut tempus eros massa ut dolor. Aenean aliquet fringilla sem. Suspendisse sed ligula in ligula suscipit aliquam. Praesent in eros vestibulum mi adipiscing adipiscing. Morbi facilisis. Curabitur ornare consequat nunc. Aenean vel metus. Ut posuere viverra nulla. Aliquam erat volutpat. Pellentesque convallis. Maecenas feugiat, tellus pellentesque pretium posuere, felis lorem euismod felis, eu ornare leo nisi vel felis. Mauris consectetur tortor et purus.</p>
// </div>';
echo '</div>';
function pt($string){
// just prints the string followed by a line break
echo $string."<br/>";
}
function pta($array){
print_r($array);
echo '<br/>';
}
?>
<?php
include('parameters_details.php');
$db = $gexf_db[$gexf];
$base = new PDO("sqlite:../" ."data/terrorism/data.db");
$query = str_replace( '__and__', '&', $_GET["query"] );
$terms_of_query = json_decode($query);
// echo "mainpath: ".$mainpath."<br>";
// echo "thedb: ".$db."<br>";
// echo "thequery: ".var_dump($terms_of_query);
echo '
<html>
<head>
<meta charset="utf-8" />
<title>Document details</title>
<link rel="stylesheet" href="js/jquery-ui.css" />
<script src="js/jquery-1.9.1.js"></script>
<script src="js/jquery-ui.js"></script>
<script>
$(function() {
$( "#tabs" ).tabs({
collapsible: true
});
});
</script>
</head>
<body>
<div id="tabs">
<ul>
<li><a href="#tabs-1">Selected Document</a></li>
<li><a href="full_doc_list2.php?'.'gexf='.urlencode($gexf).'&query='.urlencode($_GET["query"]).'&type='.urlencode($_GET["type"]).'">Full list</a></li>';
echo '</ul>';
echo '<div id="tabs-1">';
$id=$_GET["id"];
// //$elems = json_decode($query);
// $sql = 'SELECT data FROM ISIkeyword WHERE id='.$id;
// foreach ($base->query($sql) as $row) {
// $country=$CC[strtoupper($row['data'])];
// }
$sql = 'SELECT data FROM ID WHERE id='.$id.' group by data';
foreach ($base->query($sql) as $row) {
$output.='<h2>Project Identification: '.$row['data'].'</h2>';
}
$sql = 'SELECT data FROM TI WHERE id='.$id.' group by data';
foreach ($base->query($sql) as $row) {
$output.='<h2>'.$row['data'].'</h2>';
$find.="<br/><a href=http://google.com/webhp?q=".urlencode('"'.$row['data'].'"')." target='blank'>[ Search on the web ] </a>";
}
// get the authors
$sql = 'SELECT data FROM PI WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$output.='<i>'.($row['data']).'</i>, ';
}
$output = rtrim($output, ", ");
// // // get the company
// // $sql = 'SELECT data FROM ISIC1_1 WHERE id='.$id;
// // foreach ($base->query($sql) as $row) {
// //$output.=' - '.substr($row['data'],3,strlen( $row['data'])).' ';
// //}
$output.=' (2014) ';
// // get the country
// $sql = 'SELECT data FROM ISIkeyword WHERE id='.$id;
// foreach ($base->query($sql) as $row) {
// $country=$CC[strtoupper($row['data'])];
// $output.=strtoupper($country).'<br/> ';
// }
// // get the date
// $sql = 'SELECT data FROM '."ISItermsBigWL".' WHERE id='.$id;
$sql = 'SELECT data FROM ISItermsfirstindexing WHERE id='.$id;
$output.='<br/><b>Keywords: </b>';
$terms=array();
foreach ($base->query($sql) as $row) {
$terms[]=$row['data'];
}
natsort($terms);
$terms=array_unique($terms); // list of the article's terms
$keywords='';
foreach ($terms as $key => $value) {
$keywords.=$value.', ';
}
foreach ($terms_of_query as $key => $value) {
$keywords=str_replace($value,'<font color="green"><b> '.$value.'</b></font>',$keywords);
}
foreach (array_diff($terms,$terms_of_query) as $key => $value) {
$keywords=str_ireplace($value,'<font color="#800000"> '.$value.'</font>',$keywords);
}
$output.='<p align="justify">'.$keywords.'</p>';
// // get the website
$sql = 'SELECT data FROM AG1 WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$output.='<b>Agency: </b>'.$row['data'].'<br/> ';
}
$sql = 'SELECT data FROM ABS WHERE id='.$id;
// echo $output."<br>";
$abs="";
foreach ($base->query($sql) as $row) {
$abs.=". ".$row['data'];
}
$abs=str_replace('ISSUES:' ,'<br/><br/><b>Issues:</b>',$abs);
$abs=str_replace('INTENDED IMPACT:' ,'<br/><br/><b>Intended impact:</b>',$abs);
$abs=str_replace('IMPACT:' ,'<br/><br/><b>Impact:</b>',$abs);
$abs=str_replace('NOVELTY:' ,'<br/><br/><b>Novelty:</b>',$abs);
$abs=str_replace('BOLD INNOVATION:' ,'<br/><br/><b>Bold innovation:</b>',$abs);
$abs=str_replace('SOCIAL PROBLEM:' ,'<br/><br/><b>Social problem:</b>',$abs);
// fixing encoding problems
$abs=str_replace('―', ' ', $abs);
$abs=str_replace('‟‟', ' ', $abs);
$abs=str_replace('„‟', ' ', $abs);
$abs=str_replace('_x000D_', ' ', $abs);
$abs=str_replace('•', ' ', $abs);
$abs=str_replace('’', '\'', $abs);
foreach ($terms_of_query as $key => $value) {
$abs=str_ireplace($value,'<font color="green"><b> '.$value.'</b></font>',$abs);
}
foreach (array_diff($terms,$terms_of_query) as $key => $value) {
$abs=str_ireplace($value,'<font color="#800000"> '.$value.'</font>',$abs);
}
$output.='<br/><p align="justify"><b>Abstract : </b><i>'.$abs.' </i></p>';
$output.="<br>";
echo $output.$find;
echo '</div>';
//echo '<div id="tabs-2">
// <p><strong>Click this tab again to close the content pane.</strong></p>
// <p>Morbi tincidunt, dui sit amet facilisis feugiat, odio metus gravida ante, ut pharetra massa metus id nunc. Duis scelerisque molestie turpis. Sed fringilla, massa eget luctus malesuada, metus eros molestie lectus, ut tempus eros massa ut dolor. Aenean aliquet fringilla sem. Suspendisse sed ligula in ligula suscipit aliquam. Praesent in eros vestibulum mi adipiscing adipiscing. Morbi facilisis. Curabitur ornare consequat nunc. Aenean vel metus. Ut posuere viverra nulla. Aliquam erat volutpat. Pellentesque convallis. Maecenas feugiat, tellus pellentesque pretium posuere, felis lorem euismod felis, eu ornare leo nisi vel felis. Mauris consectetur tortor et purus.</p>
// </div>';
echo '</div>';
function pt($string){
// just prints the string followed by a line break
echo $string."<br/>";
}
function pta($array){
print_r($array);
echo '<br/>';
}
?>
<?php
$db= $_GET["db"];//I receive the specific database as string!
$terms_of_query=json_decode($_GET["query"]);
include('parameters_details.php');
$base = new PDO("sqlite:" .$mainpath.$db);
$query=$_GET["query"];
$gexf=$_GET["gexf"];
$max_tag_cloud_size=15;
$output = "<ul>"; // string sent to the javascript for display
$type = $_GET["type"];
$sql='SELECT id from favorites';
$wos_ids=array(); // favorite list
$num_favorite=0;
$count=0;
foreach ($base->query($sql) as $row){
$wos_ids[$row['id']] = 1;
$num_favorite+=1;
}
$favorite_keywords=array();
foreach ($wos_ids as $id => $score) {
if ($count<1000){
// retrieve publication year
$sql = 'SELECT data FROM ISIpubdate WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$pubdate=$row['data'];
}
$count+=1;
$output.="<li >";
$sql = 'SELECT data FROM ISItermsListV1 WHERE id='.$id;
foreach ($base->query($sql) as $row) {
if (array_key_exists($row['data'], $favorite_keywords)){
$favorite_keywords[$row['data']]=$favorite_keywords[$row['data']]+1;
}else{
$favorite_keywords[$row['data']]=1;
}
}
$sql = 'SELECT data FROM ISITITLE WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$output.='<a href="default_doc_details.php?db='.urlencode($db).'&type='.urlencode($_GET["type"]).'&query='.urlencode($query).'&id='.$id.'">'.$row['data']." </a> ";
//this should be the command:
//$output.='<a href="JavaScript:newPopup(\''.$twjs.'php/default_doc_details.php?db='.urlencode($datadb).'&id='.$id.' \')">'.$row['data']." </a> ";
//the old one:
//$output.='<a href="JavaScript:newPopup(\''.$twjs.'php/default_doc_details.php?id='.$id.' \')">'.$row['data']." </a> ";
$external_link="<a href=http://scholar.google.com/scholar?q=".urlencode('"'.$row['data'].'"')." target=blank>".' <img width=20px src="img/gs.png"></a>';
//$output.='<a href="JavaScript:newPopup(''php/doc_details.php?id='.$id.''')"> Link</a>';
}
// get the authors
$sql = 'SELECT data FROM ISIAUTHOR WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$output.=strtoupper($row['data']).', ';
}
if($project_folder!='nci'){
$output.='('.$pubdate.') ';
}else {
$output.='(2013) ';
}
// get the country
$sql = 'SELECT data FROM ISIkeyword WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$country=$CC[strtoupper($row['data'])];
$output.=strtoupper($country).' ';
}
//<a href="JavaScript:newPopup('http://www.quackit.com/html/html_help.cfm');">Open a popup window</a>'
$output.=$external_link."</li><br>";
}else{
continue;
}
}
arsort($favorite_keywords);
$tag_cloud_size=0;
$tag_cloud='';
foreach ($favorite_keywords as $key => $value) {
if ($tag_cloud_size<$max_tag_cloud_size){
$tag_cloud_size+=1;
$tag_cloud.='<font size="'.(3+log($value)).'">'.$key.', </font>';
}else{
continue;
}
}
$output= '<h3>'.$num_favorite.' favorite items </h3>'.$tag_cloud.'<br/>'.$output;
echo $output;
function imagestar($score,$factor,$twjs) {
// builds the html for the score star images
$star_image = '';
if ($score > .5) {
$star_image = '';
for ($s = 0; $s < min(5,$score/$factor); $s++) {
$star_image.='<img src="img/star.gif" border="0" >';
}
} else {
$star_image.='<img src="img/stargrey.gif" border="0">';
}
return $star_image;
}
?>
<?php
include('parameters_details.php');
$db = $gexf_db[$gexf];
$base = new PDO("sqlite:../" .$db);
$output = "<ul>"; // string sent to the javascript for display
#http://localhost/branch_ademe/php/test.php?type=social&query=[%22marwah,%20m%22]
$type = $_GET["type"];
$query = str_replace( '__and__', '&', $_GET["query"] );
$terms_of_query=json_decode($_GET["query"]);
$elems = json_decode($query);
// number of items in the tables
$sql='SELECT COUNT(*) FROM ISIABSTRACT';
foreach ($base->query($sql) as $row) {
$table_size=$row['COUNT(*)'];
}
$table = "";
$column = "";
$id="";
$twjs="API_CNRS/"; // submod path of TinaWebJS
if($type=="social"){
$table = "ISIAUTHOR";
$column = "data";
$id = "id";
$restriction='';
$factor=10;// factor for normalisation of stars
}
if($type=="semantic"){
$table = "ISItermsListV1";
$column = "data";
$id = "id";
$restriction='';
$factor=10;
}
$sql = 'SELECT count(*),'.$id.'
FROM '.$table.' where (';
foreach($elems as $elem){
$sql.=' '.$column.'="'.$elem.'" OR ';
}
#$querynotparsed=$sql;#####
$sql = substr($sql, 0, -3);
$sql = str_replace( ' & ', '" OR '.$column.'="', $sql );
$sql.=')'.$restriction.'
GROUP BY '.$id.'
ORDER BY count('.$id.') DESC
LIMIT 1000';
#$queryparsed=$sql;#####
$wos_ids = array();
$sum=0;
//The final query!
// array of all relevant documents with score
foreach ($base->query($sql) as $row) {
// weight the score by the number of terms the article mentions
//$num_rows = $result->numRows();
$wos_ids[$row[$id]] = $row["count(*)"];
$sum = $row["count(*)"] +$sum;
}
//arsort($wos_ids);
$number_doc=ceil(count($wos_ids)/3);
$count=0;
foreach ($wos_ids as $id => $score) {
if ($count<1000){
// retrieve publication year
$sql = 'SELECT data FROM ISIpubdate WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$pubdate=$row['data'];
}
// to filter under some conditions
$to_display=true;
if ($to_display){
$count+=1;
$output.="<li title='".$score."'>";
$output.=imagestar($score,$factor,$twjs).' ';
$sql = 'SELECT data FROM ISITITLE WHERE id='.$id." group by data";
foreach ($base->query($sql) as $row) {
$output.='<a href="default_doc_details.php?gexf='.urlencode($gexf).'&type='.urlencode($_GET["type"]).'&query='.urlencode($query).'&id='.$id.'">'.$row['data']." </a> ";
//this should be the command:
//$output.='<a href="JavaScript:newPopup(\''.$twjs.'php/default_doc_details.php?db='.urlencode($datadb).'&id='.$id.' \')">'.$row['data']." </a> ";
//the old one:
//$output.='<a href="JavaScript:newPopup(\''.$twjs.'php/default_doc_details.php?id='.$id.' \')">'.$row['data']." </a> ";
$external_link="<a href=http://scholar.google.com/scholar?q=".urlencode('"'.$row['data'].'"')." target=blank>".' <img width=20px src="img/gs.png"></a>';
//$output.='<a href="JavaScript:newPopup(''php/doc_details.php?id='.$id.''')"> Link</a>';
}
// get the authors
$sql = 'SELECT data FROM ISIAUTHOR WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$output.=strtoupper($row['data']).', ';
}
//<a href="JavaScript:newPopup('http://www.quackit.com/html/html_help.cfm');">Open a popup window</a>'
$output.=$external_link."</li><br>";
}
}else{
continue;
}
}
$output= '<h3>'.$count.' items related to: '.implode(' OR ', $elems).'</h3>'.$output;
echo $output;
function imagestar($score,$factor,$twjs) {
// builds the html for the score star images
$star_image = '';
if ($score > .5) {
$star_image = '';
for ($s = 0; $s < min(5,$score/$factor); $s++) {
$star_image.='<img src="img/star.gif" border="0" >';
}
} else {
$star_image.='<img src="img/stargrey.gif" border="0">';
}
return $star_image;
}
?>
<?php
include('parameters_details.php');
$db = $gexf_db[$gexf];
$base = new PDO("sqlite:../" ."data/terrorism/data.db");
echo "sqlite:../" ."data/terrorism/data.db";
$output = "<ul>"; // string sent to the javascript for display
#http://localhost/branch_ademe/php/test.php?type=social&query=[%22marwah,%20m%22]
$type = $_GET["type"];
$query = str_replace( '__and__', '&', $_GET["query"] );
$terms_of_query=json_decode($_GET["query"]);
$elems = json_decode($query);
// number of items in the tables
$sql='SELECT COUNT(*) FROM ISIABSTRACT';
foreach ($base->query($sql) as $row) {
$table_size=$row['COUNT(*)'];
}
$table = "";
$column = "";
$id="";
$twjs="pasteurapi/"; // submod path of TinaWebJS
if($type=="social"){
$table = "ISIAUTHOR";
$column = "data";
$id = "id";
$restriction='';
$factor=10;// factor for normalisation of stars
}
if($type=="semantic"){
$table = "ISItermsListV1";
$column = "data";
$id = "id";
$restriction='';
$factor=10;
}
$sql = 'SELECT count(*),'.$id.'
FROM '.$table.' where (';
foreach($elems as $elem){
$sql.=' '.$column.'="'.$elem.'" OR ';
}
#$querynotparsed=$sql;#####
$sql = substr($sql, 0, -3);
$sql = str_replace( ' & ', '" OR '.$column.'="', $sql );
$sql.=')'.$restriction.'
GROUP BY '.$id.'
ORDER BY count('.$id.') DESC
LIMIT 1000';
#$queryparsed=$sql;#####
$wos_ids = array();
$sum=0;
//The final query!
// array of all relevant documents with score
foreach ($base->query($sql) as $row) {
// weight the score by the number of terms the article mentions
//$num_rows = $result->numRows();
$wos_ids[$row[$id]] = $row["count(*)"];
$sum = $row["count(*)"] +$sum;
}
//arsort($wos_ids);
$number_doc=ceil(count($wos_ids)/3);
$count=0;
foreach ($wos_ids as $id => $score) {
if ($count<1000){
// retrieve publication year
$sql = 'SELECT data FROM ISIpubdate WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$pubdate="2014";
}
// to filter under some conditions
$to_display=true;
if ($to_display){
$count+=1;
$output.="<li title='".$score."'>";
$output.=imagestar($score,$factor,$twjs).' ';
$sql = 'SELECT data FROM ISITITLE WHERE id='.$id." group by data";
foreach ($base->query($sql) as $row) {
$output.='<a href="default_doc_details2.php?gexf='.urlencode($gexf).'&type='.urlencode($_GET["type"]).'&query='.urlencode($query).'&id='.$id.'">'.$row['data']." </a> ";
//this should be the command:
//$output.='<a href="JavaScript:newPopup(\''.$twjs.'php/default_doc_details.php?db='.urlencode($datadb).'&id='.$id.' \')">'.$row['data']." </a> ";
//the old one:
//$output.='<a href="JavaScript:newPopup(\''.$twjs.'php/default_doc_details.php?id='.$id.' \')">'.$row['data']." </a> ";
$external_link="<a href=http://scholar.google.com/scholar?q=".urlencode('"'.$row['data'].'"')." target=blank>".' <img width=20px src="img/gs.png"></a>';
//$output.='<a href="JavaScript:newPopup(''php/doc_details.php?id='.$id.''')"> Link</a>';
}
// get the authors
$sql = 'SELECT data FROM ISIAUTHOR WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$output.=strtoupper($row['data']).', ';
}
//<a href="JavaScript:newPopup('http://www.quackit.com/html/html_help.cfm');">Open a popup window</a>'
$output.=$external_link."</li><br>";
}
}else{
continue;
}
}
$output= '<h3>'.$count.' items related to: '.implode(' OR ', $elems).'</h3>'.$output;
echo $output;
function imagestar($score,$factor,$twjs) {
// builds the html for the score star images
$star_image = '';
if ($score > .5) {
$star_image = '';
for ($s = 0; $s < min(5,$score/$factor); $s++) {
$star_image.='<img src="img/star.gif" border="0" >';
}
} else {
$star_image.='<img src="img/stargrey.gif" border="0">';
}
return $star_image;
}
?>
<?php
include('parameters_details.php');
$db= $_GET["db"];//I receive the specific database as string!
$query=$_GET["query"];
$gexf=$_GET["gexf"];
$base = new PDO("sqlite:" .$mainpath.$db);
$temp=explode('/',$db);
$project_folder=$temp[1];
$corpora=$temp[count($temp)-2];
$corporadb = new PDO("sqlite:" .$mainpath.'data/'.$corpora.'/'.$corpora.'.sqlite'); //data base with complementary data
$output = "<ul>"; // string sent to the javascript for display
#http://localhost/branch_ademe/php/test.php?type=social&query=[%22marwah,%20m%22]
$type = $_GET["type"];
$query = str_replace( '__and__', '&', $_GET["query"] );
$elems = json_decode($query);
// number of items in the tables
$sql='SELECT COUNT(*) FROM ISIABSTRACT';
foreach ($base->query($sql) as $row) {
$table_size=$row['COUNT(*)'];
}
///// Specific to rock //////////
// Other restrictions
// extracting the project folder and the year
if (strpos($gexf,'2013')>0){
$year='2013';
$year_filter=true;
}elseif (strpos($gexf,'2012')>0){
$year='2012';
$year_filter=true;
}else{
$year_filter=false;
}
// identify a year for the echoing project
if($project_folder=='nci'){
$year_filter=true;
}
$table = "";
$column = "";
$id="";
$twjs="tinawebJS/"; // submod path of TinaWebJS
if($type=="social"){
$table = "ISIAUTHOR";
$column = "data";
$id = "id";
$restriction='';
$factor=10;// factor for normalisation of stars
}
if($type=="semantic"){
$table = "ISItermsListV1";
$column = "data";
$id = "id";
$restriction='';
$factor=10;
}
// identify a year for the echoing project
if($project_folder=='nci'){
$restriction.=" AND ISIpubdate='2013'";
}
$sql = 'SELECT sum(tfidf),id
FROM tfidf where (';
foreach($elems as $elem){
$sql.=' term="'.$elem.'" OR ';
}
#$querynotparsed=$sql;#####
$sql = substr($sql, 0, -3);
$sql = str_replace( ' & ', '" OR term="', $sql );
$sql.=')'.//$restriction.
'GROUP BY '.$id.'
ORDER BY sum(tfidf) DESC
LIMIT 1000';
//echo $sql;
#$queryparsed=$sql;#####
$wos_ids = array();
$sum=0;
//echo $sql;//The final query!
// array of all relevant documents with score
$count=0;
foreach ($corporadb->query($sql) as $row) {
//if ($count<4*$max_item_displayed){
$wos_ids[$row[$id]] = $row['sum(tfidf)'];//$row["count(*)"];
$sum = $row['sum(tfidf)'] +$sum;
//}else{
// continue;
//}
}
//arsort($wos_ids);
$number_doc=ceil(count($wos_ids)/3);
$count=0;
foreach ($wos_ids as $id => $score) {
if ($count<1000){
// retrieve publication year
$sql = 'SELECT data FROM ISIpubdate WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$pubdate=$row['data'];
}
// to filter under some conditions
$to_display=true;
if ($project_folder=='echoing'){
if ($year_filter){
if ($pubdate!=$year){
$to_display=false;
}
}
}elseif($project_folder=='nci'){
if ($year_filter){
if ($pubdate!='2013'){
$to_display=false;
}
}
}
if ($to_display){
$count+=1;
$output.="<li title='".$score."'>";
$output.=imagestar($score,$factor,$twjs).' ';
$sql = 'SELECT data FROM ISITITLE WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$output.='<a href="default_doc_details.php?db='.urlencode($db).'&type='.urlencode($_GET["type"]).'&query='.urlencode($query).'&id='.$id.'">'.$row['data']." </a> ";
//this should be the command:
//$output.='<a href="JavaScript:newPopup(\''.$twjs.'php/default_doc_details.php?db='.urlencode($datadb).'&id='.$id.' \')">'.$row['data']." </a> ";
//the old one:
//$output.='<a href="JavaScript:newPopup(\''.$twjs.'php/default_doc_details.php?id='.$id.' \')">'.$row['data']." </a> ";
$external_link="<a href=http://scholar.google.com/scholar?q=".urlencode('"'.$row['data'].'"')." target=blank>".' <img width=20px src="img/gs.png"></a>';
//$output.='<a href="JavaScript:newPopup(''php/doc_details.php?id='.$id.''')"> Link</a>';
}
// get the authors
$sql = 'SELECT data FROM ISIAUTHOR WHERE id='.$id;
foreach ($base->query($sql) as $row) {
$output.=strtoupper($row['data']).', ';
}
if($project_folder!='nci'){
$output.='('.$pubdate.') ';
}else {
$output.='(2013) ';
}
//<a href="JavaScript:newPopup('http://www.quackit.com/html/html_help.cfm');">Open a popup window</a>'
$output.=$external_link."</li><br>";
}
}else{
continue;
}
}
$output= '<h3>'.$count.' items related to: '.implode(' OR ', $elems).'</h3>'.$output;
echo $output;
function imagestar($score,$factor,$twjs) {
// builds the html for the score star images
$star_image = '';
if ($score > .5) {
$star_image = '';
for ($s = 0; $s < min(5,$score/$factor); $s++) {
$star_image.='<img src="img/star.gif" border="0" >';
}
} else {
$star_image.='<img src="img/stargrey.gif" border="0">';
}
return $star_image;
}
?>
<?php
// manage the dynamic additional information in the left panel.
// ini_set('display_errors',1);
// ini_set('display_startup_errors',1);
// error_reporting(-1);
include('parameters_details.php');
$max_item_displayed=6;
$base = new PDO("sqlite:../" .$graphdb);
include('default_div.php');
/*
* This function gets the first db name in the data folder
* IT'S NOT SCALABLE! (If you want to use several db's)
*/
function getDB ($directory) {
//$results = array();
$result = "";
$handler = opendir($directory);
while ($file = readdir($handler)) {
if ($file != "." && $file != ".."
&&
((strpos($file,'.db~'))==false && (strpos($file,'.db'))==true )
||
((strpos($file,'.sqlite~'))==false && (strpos($file,'.sqlite'))==true)
) {
//$results[] = $file;
$result = $file;
break;
}
}
closedir($handler);
//return $results;
return $result;
}
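// Usage sketch (hypothetical call): returns the first *.db / *.sqlite
// file name found in the folder, e.g. getDB("../data/terrorism/")
// would return "data.db" with the data layout used above.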
?>
<?php
// manage the dynamic additional information in the left panel.
// include('parameters_details.php');
$gexf= str_replace('"','',$_GET["gexf"]);
$max_item_displayed=6;
$type = $_GET["type"];
$TITLE="ISITITLE";
$query = str_replace( '__and__', '&', $_GET["query"] );
$elems = json_decode($query);
$table = "";
$column = "";
$id="";
$twjs="API_CNRS/"; // submod path of TinaWebJS
if($type=="semantic"){
$table = "ISItermsListV1";
$column = "data";
$id = "id";
$restriction='';
$factor=10;
}
$restriction='';
$factor=10;
$sql="";
if (count($elems)==1){// a single word is selected: count multiple mentions of it
$sql = 'SELECT count(*),'.$id.'
FROM '.$table.' where (';
foreach($elems as $elem){
$sql.=' '.$column.'="'.$elem.'" OR ';
}
#$querynotparsed=$sql;#####
$sql = substr($sql, 0, -3);
$sql = str_replace( ' & ', '" OR '.$column.'="', $sql );
$sql.=')'.$restriction.'
GROUP BY '.$id.'
ORDER BY count('.$id.') DESC
LIMIT 1000';
}else{// count each word only once per article
$factor=ceil(count($elems)/5); // scores are lower
$sql='';
foreach($elems as $elem){
$sql.=' '.$column.'="'.$elem.'" OR ';
}
$sql=substr($sql, 0, -3);
$sql='SELECT count(*),id,data FROM (SELECT *
FROM '.$table.' where ('.$sql.')'.$restriction.'
group by id,data) GROUP BY '.$id.'
ORDER BY count('.$id.') DESC
LIMIT 1000';
}
// echo $sql."<br>";
$base = new PDO("sqlite:../data/terrorism/data.db");
$wos_ids = array();
$sum=0;
$output = "<ul>"; // string sent to the javascript for display
$related = "";
$news = "";
//The final query!
// array of all relevant documents with score
foreach ($base->query($sql) as $row) {
// weight the score by the number of terms the article mentions
//$num_rows = $result->numRows();
$wos_ids[$row[$id]] = $row["count(*)"];
$sum = $row["count(*)"] +$sum;
}
// number of associated documents: $related
$total_count=0;
$count_max=500;
$number_doc=count($wos_ids);
$count=0;
$all_terms_from_selected_projects=array();// list of terms for the top 6 project selected
// to filter under some conditions
$to_display=true;
$count=0;
foreach ($wos_ids as $id => $score) {
if ($total_count<$count_max) {
// retrieve publication year
if ($to_display){
$total_count+=1;
if ($count<=$max_item_displayed){
$count+=1;
$sql = 'SELECT data FROM ISITITLE WHERE id='.$id.' group by data';
foreach ($base->query($sql) as $row) {
$external_link="<a href=http://google.com/webhp?#q=".urlencode('"'.utf8_decode($row['data']).'"')." target=blank>".' <img width=15px src="'.$twjs.'img/google.png"></a>';
$output.="<li title='".$score."'>";
$output.=$external_link.imagestar($score,$factor,$twjs).' ';
$output.='<a href="JavaScript:newPopup(\''.$twjs.'default_doc_details2.php?gexf='.urlencode($gexf).'&query='.urlencode($query).'&type='.urlencode($_GET["type"]).'&id='.$id.' \')">'.htmlentities($row['data'], ENT_QUOTES, "UTF-8")." </a> ";
// $output.='<a>'.htmlentities($row['data'], ENT_QUOTES, "UTF-8")." </a> ";
}
$sql = 'SELECT data FROM ISIDOI WHERE id='.$id.' group by data';
foreach ($base->query($sql) as $row) {
$output.=$external_link.imagestar($score,$factor,$twjs).' ';
$output.='<a href="JavaScript:newPopup(\''.$twjs.'default_doc_details2.php?gexf='.urlencode($gexf).'&query='.urlencode($query).'&type='.urlencode($_GET["type"]).'&id='.$id.' \')">'.htmlentities($row['data'], ENT_QUOTES, "UTF-8")." </a> ";
} // get the authors
$sql2 = 'SELECT data FROM ISIAUTHOR WHERE id='.$id. ' group by data';
foreach ($base->query($sql2) as $row2) {
$output.=(str_replace("\r", "", $row2['data'])).', ';
}
$output = rtrim($output, ", ");
$output.="</li><br>";
}
}
} else{
continue;
}
}
if ($total_count<$count_max){
$related .= $total_count;
}else{
$related .= ">".$count_max;
}
$output .= "</ul>"; #####
// echo $output."<br>";
if($max_item_displayed>$related) $max_item_displayed=$related;
echo $news.'<br/><h4><font color="#0000FF"> Full text of top '.$max_item_displayed.'/'.$related.' related grant proposals:</font></h4>'.$output;
//pt - 301 ; 15.30
/*
* This function gets the first db name in the data folder
* IT'S NOT SCALABLE! (If you want to use several db's)
*/
function getDB ($directory) {
//$results = array();
$result = "";
$handler = opendir($directory);
while ($file = readdir($handler)) {
if ($file != "." && $file != ".."
&&
((strpos($file,'.db~'))==false && (strpos($file,'.db'))==true )
||
((strpos($file,'.sqlite~'))==false && (strpos($file,'.sqlite'))==true)
) {
//$results[] = $file;
$result = $file;
break;
}
}
closedir($handler);
//return $results;
return $result;
}
function imagestar($score,$factor,$twjs) {
// builds the html for the score star images
$star_image = '';
if ($score > .5) {
$star_image = '';
for ($s = 0; $s < min(5,$score/$factor); $s++) {
$star_image.='<img src="'.$twjs.'img/star.gif" border="0" >';
}
} else {
$star_image.='<img src="'.$twjs.'img/stargrey.gif" border="0">';
}
return $star_image;
}
?>
<?php
header ("Content-Type:application/json");
//$string = getcwd();
//$string = str_replace("/php","",$string);
$string=dirname(dirname(getcwd())); // ProjectExplorer folder name: /var/www/ademe
//$files = getDirectoryList($string."/data");
include("DirectoryScanner.php");
$projectFolderPat = dirname(dirname(getcwd())) . "/";
$instance = new scanTree($projectFolderPat);
$instance->getDirectoryTree("data");
$gexfs=$instance->gexf_folder;
$files=array();
foreach($gexfs as $key => $value){
array_push($files,$key);
}
sort($files);
echo json_encode($files);
function getDirectoryList ($directory) {
$results = array();
$handler = opendir($directory);
while ($file = readdir($handler)) {
if ($file != "." && $file != ".." &&
(strpos($file,'.gexf~'))==false &&
(strpos($file,'.gexf'))==true) {
$results[] = $file;
}
}
closedir($handler);
return $results;
}
?>
<?php
$gexf_db = array();
$gexf_db["data/medq1/20141208_MED_01_bi.gexf"] = "data/medq1/01_medline-query1.db";
$gexf_db["data/medq2/20141128_MED_02_bi.gexf"] = "data/medq2/02_medline-query2.db";
$gexf_db["data/medq2/20141128_MED_03_bi.gexf"] = "data/medq2/02_medline-query2.db";
$gexf_db["data/medq2/20141208_MED_Author_name-ISItermsjulien_index.gexf"] = "data/medq2/02_medline-query2.db";
$gexf_db["data/20141128_GPs_03_bi.gexf"] = "data/00_grantproposals.db";
$gexf_db["data/20141215_GPs_04.gexf"] = "data/00_grantproposals.db";
# new stuff
$gexf_db["data/terrorism/terrorism_mono.gexf"] = "data/terrorism/data.db";
$gexf_db["data/terrorism/terrorism_bi.gexf"] = "data/terrorism/data.db";
# new stuff2
$gexf_db["data/ClimateChange/hnetwork-2014_2015hhn-wosclimatechange2014_2015top509-ISItermsListV3bis-ISItermsListV3bis-distributionalcooc-99999-oT0.36-20-louTrue.gexf"] = "data/ClimateChange/wosclimatechange-61715-1-wosclimatechange-db(2).db";
$gexf_db["data/ClimateChange/ClimateChangeV1.gexf"] = "data/ClimateChange/wosclimatechange-61715-1-wosclimatechange-db(2).db";
$gexf_db["data/ClimateChange/hnetwork-2014_2015hn-wosclimatechange2014_2015top509-ISItermsListV3bis-ISItermsListV3bis-distributionalcooc-99999-oT0.36-20-louTrue.gexf"] = "data/ClimateChange/wosclimatechange-61715-1-wosclimatechange-db(2).db";
$gexf= str_replace('"','',$_GET["gexf"]);
$mainpath=dirname(getcwd())."/";
$graphdb = $gexf_db[$gexf];
?>
<?php
echo '<meta http-equiv="Content-type" content="text/html; charset=UTF-8"/>';
// compute the tfidf score of each term in each document, for cortext-like databases, and store the scores in a dedicated table
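// The score computed below is the smoothed tf-idf variant
//     tfidf(t, d) = log(1 + tf(t,d)) * log(N / freq(t))
// where tf(t,d) is the number of occurrences of term t in document d,
// freq(t) the total number of mentions of t in the corpus (computed
// from ISItermsListV1 below), and N the number of documents.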
//include('parameters_details.php');
$db = new PDO("sqlite:graph.db");
$database_name='echoing.sqlite';
$project_base = new PDO("sqlite:" .$database_name);
// Table creation
// drop any existing tfidf table
$project_base->exec("DROP TABLE IF EXISTS tfidf");
pt("creation of tfidf table");
//create a table for the tfidf scores
$project_base->exec("CREATE TABLE tfidf (id NUMERIC,term TEXT,tfidf NUMERIC)");
//count the number of mentions of each term in the corpus
$terms_freq=array();
pt('processing terms frequency');
$sql='SELECT count(*),data FROM ISItermsListV1 group by data';
foreach ($db->query($sql) as $term) {
$terms_freq[$term['data']]=$term['count(*)'];
}
pt('processing number of doc');
// number of items in the tables
$sql='SELECT COUNT(*) FROM ISIABSTRACT';
foreach ($db->query($sql) as $row) {
$table_size=$row['COUNT(*)'];
}
pt($table_size.' documents in database');
// select all the doc
$sql='SELECT * FROM ISIABSTRACT';
foreach ($db->query($sql) as $doc) {
$id=$doc['id'];
pt($id);
//select all the terms of that document with their occurrences
$sql2="SELECT count(*),data FROM ISItermsListV1 where id='".$id."' group by data";
// for each term we compute the tfidf
foreach ($db->query($sql2) as $term_data) {
$term=$term_data['data'];
$term_occ_in_doc=$term_data['count(*)'];
$terms_tfidf=log(1+$term_occ_in_doc)*log($table_size/$terms_freq[$term]);
$query='INSERT INTO tfidf (id,term,tfidf) VALUES ('.$id.',"'.$term.'",'.$terms_tfidf.')';
$project_base->query($query);
}
}
function pt ($string) {
echo $string.'<br/>';
}
?>
{
"data/ClimateChange": {
"dbname":"wosclimatechange-61490-1-wosclimatechange-db.db",
"title":"ISITITLE",
"date":"ISIpubdate",
"abstract":"ISIABSTRACT",
"gexfs": {
"hnetwork-2014_2015hn-wosclimatechange2014_2015top509-ISItermsListV3bis-ISItermsListV3bis-distributionalcooc-99999-oT0.36-20-louTrue.gexf": {
"social": { "table":"ISIAUTHOR" , "textCol":"data","forkeyCol":"id"},
"semantic": { "table":"ISItermsListV3bis" , "textCol":"data","forkeyCol":"id"}
},
"ClimateChangeV1.gexf": {
"social": { "table":"ISIAUTHOR" , "textCol":"data","forkeyCol":"id"},
"semantic": { "table":"ISItermsListV3bis" , "textCol":"data","forkeyCol":"id"}
}
}
},"data/terrorism": {
"dbname":"data.db",
"title":"ISITITLE",
"date":"ISIpubdate",
"abstract":"ISIABSTRACT",
"gexfs": {
"terrorism_bi.gexf": {
"social": { "table":"ISIAUTHOR" , "textCol":"data","forkeyCol":"id"},
"semantic": { "table":"ISItermsListV1" , "textCol":"data","forkeyCol":"id"}
},
"terrorism_mono.gexf":{
"semantic": { "table":"ISItermsListV1" , "textCol":"data","forkeyCol":"id"}
}
}
},
"data/medq2/": {
"dbname":"02_medline-query2.db",
"title":"ArticleTitle",
"date":"ISIpubdate",
"abstract":"Abstract",
"gexfs": {
"20141208_MED_Author_name-ISItermsjulien_index.gexf": {
"social": { "table":"Author_name" , "textCol":"data","forkeyCol":"id"},
"semantic": { "table":"ISItermsBigWL" , "textCol":"data","forkeyCol":"id"}
}
}
}
}
// dot call_graph.dot -Tpng -o tina_call_graph.png
digraph tina_call_graph {
graph [ordering="out"];
rankdir=LR ;
edge [fontsize=10] ;
label=<<B><U>tinawebJS</U></B><BR/>(initialization callgraph)>;
labelloc="t" ;
// settings
"settings var" -> "settings:SystemStates";
"settings var" -> "settings:sigmaJsDrawingProperties";
"settings var" -> "etc.";
// getUrlParam
"t.globalUtils:getUrlParam" -> "var mainfile (url)" ;
// main 1: get graph
"t.main" -> "var mainfile (url)" ;
"var mainfile (url)" -> "ajax garg" ;
"ajax garg" -> "t.main:MainFunction" ;
// main 2: parse graph
"t.main:MainFunction" -> "t.sigma.parseCustom:ParseCustom" ;
"t.main:MainFunction" -> "t.sigma.parseCustom:scanFile" ;
"t.sigma.parseCustom:scanFile" -> "t.sigma.parseCustom:getJSONCategories" ;
"t.sigma.parseCustom:getJSONCategories" -> "t.sigma.parseCustom:scanJSON" ;
"t.main:MainFunction" -> "t.sigma.parseCustom:makeSystemStates" ;
"t.main:MainFunction" -> "t.sigma.parseCustom:buildInitialState" ;
"t.main:MainFunction" -> "t.sigma.parseCustom:makeDicts" ;
"t.sigma.parseCustom:makeDicts" -> "t.sigma.parseCustom:dictfyJSON" [label="cats={'terms':0}"] ;
// main 3: new TinaWebJS()
"t.main:MainFunction" -> "var twjs_" ;
"var twjs_" -> "t.TinawebJS:TinaWebJS:new" ;
// main 4: adjust canvas routine
"t.main:MainFunction" -> "t.TinawebJS:AdjustSigmaCanvas" ; // twjs_.AdjustSigmaCanvas()
"t.TinawebJS:AdjustSigmaCanvas" -> "t.TinawebJS:sigmaLimits" ;
"t.TinawebJS:sigmaLimits" -> "t.TinawebJS:visibleHeight" ;
"t.TinawebJS:sigmaLimits" -> "new canvas!" ;
// main 5: partialGraph and new SigmaUtils()
"t.main:MainFunction" -> "var partialGraph" ;
"var partialGraph" -> "sigma:init";
"t.main:MainFunction" -> "t.SigmaUtils:SigmaUtils:new" ;
"t.main:MainFunction" -> "t.SigmaUtils:SigmaUtils:FillGraph" ; // [ Poblating the Sigma-Graph ]
"t.SigmaUtils:SigmaUtils:FillGraph" -> "SigmaPublic.addNode" [label="x N"];
"t.SigmaUtils:SigmaUtils:FillGraph" -> "SigmaPublic.addEdge" [label="x N"];
"SigmaPublic.addEdge" -> "t.globalUtils:hex2rga" [label="x M"];
"t.SigmaUtils:SigmaUtils:FillGraph" -> "t.enviroment:updateSearchLabels" [label="N x push labels"];
// main 6: state and settings for partialGraph
// "settings:sigmaJsDrawingProperties" -> "var partialGraph" ;
// "settings:SystemStates" -> "var partialGraph" ;
"var partialGraph" -> "t.main:partialGraph:setState";
// main 7: twjs_.initListeners( categories , partialGraph)
"t.main:MainFunction" -> "t.TinawebJS:initListeners" ;
"t.TinawebJS:initListeners" -> "t.TinawebJS:SelectionEngine:new" [label="initListeners:SelInst"] ;
"t.TinawebJS:initListeners" -> "onclick:#changetype" ;
"t.TinawebJS:initListeners" -> "onclick:#changelevel" ;
"t.TinawebJS:initListeners" -> "onclick:#aUnfold" ;
"t.TinawebJS:initListeners" -> "t.minimap:startMiniMap" [label = "if minimap"] ;
"t.TinawebJS:initListeners" -> "t.methods:pushSWClick" [label = "var swclickActual"] ;
"t.TinawebJS:initListeners" -> "t.methods:cancelSelection" ;
"t.methods:cancelSelection" -> "t.methods:highlightSelectedNodes" [label = "false"] ;
"t.methods:highlightSelectedNodes" -> "t.globalUtils:is_empty" ;
"t.methods:cancelSelection" -> "erase:#names" ;
"t.methods:cancelSelection" -> "erase:#ngrams_actions" ;
"t.methods:cancelSelection" -> "erase:#topPapers" ;
"t.methods:cancelSelection" -> "erase:#opossiteNodes" ;
"t.methods:cancelSelection" -> "erase:#searchinput" ;
"t.methods:cancelSelection" -> "t.methods:LevelButtonDisable" ;
"t.TinawebJS:initListeners" -> "t.sigmaUtils:showMeSomeLabels" ;
"t.sigmaUtils:showMeSomeLabels" -> "t.sigmaUtils:getVisibleNodes" ;
"t.TinawebJS:initListeners" -> "t.TinawebJS:SearchListeners" ;
"t.TinawebJS:SearchListeners" -> "autocomplete:#searchinput" ;
"autocomplete:#searchinput" -> "t.TinawebJS:SelectionEngine:new" [label="SearchListeners:SelInst"] ;
/*t.methods:highlightSelectedNodes*/
}
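// center the camera on the canvas midpoint at zoom ratio 0.2, then redraw: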
partialGraph.zoomTo(partialGraph._core.width / 2, partialGraph._core.height / 2, 0.2).draw();
SystemStates (example value):
// {
// "level": true,
// "type": [
// true
// ],
// "selections": [],
// "opposites": [],
// "categories": [
// "terms"
// ],
// "categoriesDict": {
// "terms": "0"
// },
// "LouvainFait": false
// }
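
At runtime the explorer keeps a history of such states; a minimal sketch of reading the latest one back, assuming the `partialGraph.states` array hinted at by the commented line in set_ClustersLegend further down:

``` js
// Hedged sketch: the last entry of partialGraph.states is the live SystemStates.
var currentState = partialGraph.states.slice(-1)[0];
if (!currentState.LouvainFait) {
    RunLouvain();                     // compute communities once per graph
    currentState.LouvainFait = true;  // cf. set_ClustersLegend's commented line
}
```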
......@@ -3,11 +3,13 @@
*/
function newPopup(url) {
console.log('FUN extras_explorerjs:newPopup')
popupWindow = window.open(url,'popUpWindow','height=700,width=800,left=10,top=10,resizable=yes,scrollbars=yes,toolbar=no,menubar=no,location=no,directories=no,status=no')
}
function getIDFromURL( item ) {
console.log('FUN extras_explorerjs:getIDFromURL')
var pageurl = window.location.href.split("/")
var cid;
for(var i in pageurl) {
......@@ -20,6 +22,7 @@ function getIDFromURL( item ) {
}
function modify_ngrams( classname ) {
console.log('FUN extras_explorerjs:modify_ngrams')
console.clear()
var corpus_id = getIDFromURL( "corpora" ) // not used
......@@ -92,6 +95,7 @@ function CRUD( list_id , ngram_ids , http_method , callback) {
// then, add the button in the html with the sigmaUtils.clustersBy(x) listener.
//TODO: move to ClustersPlugin.js or smntng
function ChangeGraphAppearanceByAtt( manualflag ) {
console.log('FUN extras_explorerjs:ChangeGraphAppearanceByAtt')
if ( !isUndef(manualflag) && !colorByAtt ) colorByAtt = manualflag;
if(!colorByAtt) return;
......@@ -145,11 +149,11 @@ function CRUD( list_id , ngram_ids , http_method , callback) {
var div_info = "";
if( $( ".colorgraph_div" ).length>0 )
div_info += '<ul id="colorGraph" class="nav navbar-nav navbar-right">'
div_info += '<ul id="colorGraph" class="nav navbar-nav">'
div_info += ' <li class="dropdown">'
div_info += '<a href="#" class="dropdown-toggle" data-toggle="dropdown">'
div_info += ' <img title="Set Colors" src="/static/img/colors.png" width="20px"><b class="caret"></b></img>'
div_info += ' <img title="Set Colors" src="/static/img/colors.png" width="22px"><b class="caret"></b></img>'
div_info += '</a>'
div_info += ' <ul class="dropdown-menu">'
......@@ -182,11 +186,11 @@ function CRUD( list_id , ngram_ids , http_method , callback) {
div_info = "";
if( $( ".sizegraph_div" ).length>0 )
div_info += '<ul id="sizeGraph" class="nav navbar-nav navbar-right">'
div_info += '<ul id="sizeGraph" class="nav navbar-nav">'
div_info += ' <li class="dropdown">'
div_info += '<a href="#" class="dropdown-toggle" data-toggle="dropdown">'
div_info += ' <img title="Set Sizes" src="/static/img/NodeSize.png" width="20px"><b class="caret"></b></img>'
div_info += ' <img title="Set Sizes" src="/static/img/NodeSize.png" width="18px"><b class="caret"></b></img>'
div_info += '</a>'
div_info += ' <ul class="dropdown-menu">'
......@@ -198,7 +202,8 @@ function CRUD( list_id , ngram_ids , http_method , callback) {
return b-a
});
console.clear()
// console.clear()
console.log( AttsDict_sorted )
for (var i in AttsDict_sorted) {
var att_s = AttsDict_sorted[i].key;
......@@ -230,6 +235,7 @@ function CRUD( list_id , ngram_ids , http_method , callback) {
// then, it runs the external jLouvain() library
//TODO: move to ClustersPlugin.js or smntng
function RunLouvain() {
console.log('FUN extras_explorerjs:RunLouvain')
var node_realdata = []
var nodesV = getVisibleNodes()
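
The truncated body goes on to feed `node_realdata` into jLouvain; a self-contained sketch of that call, assuming `node_realdata` ends up holding plain node ids and that a `getVisibleEdges()` sibling of `getVisibleNodes()` exists:

``` js
// Hedged sketch of a jLouvain run (github.com/upphiminn/jLouvain):
// plain node ids in, { node_id: community_index } out.
var edge_list = getVisibleEdges().map(function (e) {
    return { source: e.source, target: e.target, weight: e.weight || 1 };
});
var community = jLouvain().nodes(node_realdata).edges(edge_list);
var partition = community();  // e.g. { "483": 0, "3561": 0, "9754": 1, ... }
```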
......@@ -256,6 +262,7 @@ function CRUD( list_id , ngram_ids , http_method , callback) {
// Highlight nodes belonging to cluster_i when you click on cluster_i in the legend
//TODO: move to ClustersPlugin.js or smntng
function HoverCluster( ClusterCode ) {
console.log('FUN extras_explorerjs:HoverCluster')
console.log( ClusterCode )
var raw = ClusterCode.split("||")
......@@ -343,6 +350,7 @@ function CRUD( list_id , ngram_ids , http_method , callback) {
// daclass = "clust_default" | "clust_louvain" | "clust_x" ...
//TODO: move to ClustersPlugin.js or smntng
function set_ClustersLegend ( daclass ) {
console.log('FUN extras_explorerjs:set_ClustersLegend')
//partialGraph.states.slice(-1)[0].LouvainFait = true
if( daclass=="clust_default" && Clusters.length==0)
......@@ -404,6 +412,7 @@ function CRUD( list_id , ngram_ids , http_method , callback) {
// PHP mode, for when you have a cortext db.
function getTopPapers_OriginalVersion(type){
console.log('FUN extras_explorerjs:getTopPapers_OriginalVersion')
if(getAdditionalInfo){
jsonparams=JSON.stringify(getSelections());
bi=(Object.keys(categories).length==2)?1:0;
......@@ -435,7 +444,7 @@ function getTopPapers_OriginalVersion(type){
// PHP mode, for when you have a cortext db.
function getTopProposals(type , jsonparams , thisgexf) {
console.log('FUN extras_explorerjs:getTopProposals')
type = "semantic";
if(swclickActual=="social") {
nodesA = []
......@@ -491,6 +500,7 @@ function getTopProposals(type , jsonparams , thisgexf) {
// Just for Garg
function genericGetTopPapers(theids , corpus_id , thediv) {
console.log('FUN extras_explorerjs:genericGetTopPapers')
if(getAdditionalInfo) {
$("#"+thediv).show();
$.ajax({
......@@ -551,6 +561,7 @@ function genericGetTopPapers(theids , corpus_id , thediv) {
// Just for Garg: woops, override
function getTopPapers(type){
console.log('FUN extras_explorerjs:getTopPapers')
if(getAdditionalInfo){
$("#topPapers").show();
......@@ -601,9 +612,8 @@ function getTopPapers(type){
}
// ex url_mainIDs = {projects: 1, corpora: 2690}
// link to matching document
var getpubAPI = window.location.origin+'/projects/'+url_mainIDs["projects"]+'/corpora/'+ url_mainIDs["corpora"] + '/documents/'+pub["id"]
// link to matching document (with focus=selections_ids param)
var getpubAPI = window.location.origin+'/projects/'+url_mainIDs["projects"]+'/corpora/'+ url_mainIDs["corpora"] + '/documents/'+pub["id"]+'/focus='+theids.join(",")
var ifjournal="",ifauthors="",ifkeywords="",ifdate="",iftitle="";
......@@ -624,7 +634,7 @@ function getTopPapers(type){
jsstuff += "wnws_buffer = window.open('"+getpubAPI+"', 'popUpWindow' , '"+jsparams+"')";
output += "<li><a onclick=\""+jsstuff+"\" target=_blank>"+pub["title"]+"</a>. "+ifauthors+". "+ifjournal+". "+ifkeywords+". "+ifdate+"\n";
output += '<a href="'+gquery+'" target=_blank><img title="Query to Google" src="'+window.location.origin+'/static/img/searx.png"></img></a>'
output += '<a href="'+gquery+'" target=_blank><img title="Query the web" src="'+window.location.origin+'/static/img/searx.png"></img></a>'
output +="</li>\n";
// for(var j in pub) {
// if(j!="abstract")
......@@ -654,6 +664,7 @@ function getTopPapers(type){
}
function getCookie(name) {
console.log('FUN extras_explorerjs:getCookie')
var cookieValue = null;
if (document.cookie && document.cookie != '') {
var cookies = document.cookie.split(';');
......@@ -670,6 +681,7 @@ function getCookie(name) {
}
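
getCookie's main consumer is the CRUD helper whose signature appears in the hunk headers above: Django-style endpoints expect the csrftoken cookie echoed back as a request header on writes. A minimal sketch (the endpoint path is assumed, not taken from this commit):

``` js
// Hedged sketch: send ngram ids to a list, echoing the CSRF cookie
// (header name per the usual Django convention).
$.ajax({
    url: window.location.origin + '/api/ngramlists/' + list_id,  // path assumed
    type: http_method,                                           // e.g. 'PUT' or 'DELETE'
    data: { 'ngrams': ngram_ids.join(',') },
    beforeSend: function (xhr) {
        xhr.setRequestHeader('X-CSRFToken', getCookie('csrftoken'));
    },
    success: function (data) { callback(true, data); },
    error:   function ()     { callback(false); }
});
```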
// Just for Garg
function printCorpuses() {
console.log('FUN extras_explorerjs:printCorpuses')
console.clear()
console.log( "!!!!!!!! Corpus chosen, going to make the diff !!!!!!!! " )
pr(corpusesList)
......@@ -768,7 +780,7 @@ function printCorpuses() {
// var pageurl = window.location.href.split("/")
// var cid;
// for(var i in pageurl) {
// if(pageurl[i]=="corpus") {
// if(pageurl[i]=="corpora") {
// cid=parseInt(i);
// break;
// }
......@@ -791,6 +803,7 @@ function printCorpuses() {
// Just for Garg
function GetUserPortfolio() {
console.log('FUN extras_explorerjs:GetUserPortfolio')
//http://localhost:8000/api/corpusintersection/1a50317a50145
var pageurl = window.location.href.split("/")
var pid;
......@@ -804,7 +817,7 @@ function GetUserPortfolio() {
var cid;
for(var i in pageurl) {
if(pageurl[i]=="corpus") {
if(pageurl[i]=="corpora") {
cid=parseInt(i);
break;
}
......@@ -908,6 +921,7 @@ function GetUserPortfolio() {
}
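
The "corpus" → "corpora" edits here and in getIDFromURL track the URL scheme; the index arithmetic both loops rely on works like this (sketch, host and ids hypothetical):

``` js
// window.location.href = "http://host/projects/1/corpora/2690/explorer"
var pageurl = "http://host/projects/1/corpora/2690/explorer".split("/");
// => ["http:", "", "host", "projects", "1", "corpora", "2690", "explorer"]
var cid;
for (var i in pageurl) {
    if (pageurl[i] == "corpora") {
        cid = parseInt(i);
        break;
    }
}
var corpus_id = pageurl[cid + 1];  // "2690": segment right after "corpora" (assumed convention)
```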
function camaraButton(){
console.log('FUN extras_explorerjs:camaraButton')
$("#PhotoGraph").click(function (){
//canvas=partialGraph._core.domElements.nodes;
......@@ -960,6 +974,7 @@ function camaraButton(){
}
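
camaraButton's truncated body reads the sigma canvases back as an image; a sketch of the standard approach, assuming sigma keeps separate edges/nodes/labels layers under partialGraph._core.domElements (only .nodes is confirmed by the comment above):

``` js
// Hedged sketch: merge sigma's canvas layers into one PNG snapshot.
var layers = partialGraph._core.domElements;   // edges / nodes / labels assumed
var out = document.createElement('canvas');
out.width  = layers.nodes.width;
out.height = layers.nodes.height;
var ctx = out.getContext('2d');
ctx.drawImage(layers.edges,  0, 0);
ctx.drawImage(layers.nodes,  0, 0);
ctx.drawImage(layers.labels, 0, 0);
var img = new Image();                         // data-URL export
img.src = out.toDataURL('image/png');
document.body.appendChild(img);
```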
function getTips(){
console.log('FUN extras_explorerjs:getTips')
param='';
text =
......
{"graph": [["name", "()"]], "links": [{"target": 34, "source": 54, "weight": 1}, {"target": 25, "source": 54, "weight": 1}, {"target": 10, "source": 54, "weight": 1}, {"target": 20, "source": 14, "weight": 1}, {"target": 67, "source": 14, "weight": 1}, {"target": 4, "source": 55, "weight": 1}, {"target": 38, "source": 55, "weight": 1}, {"target": 34, "source": 15, "weight": 1}, {"target": 33, "source": 56, "weight": 1}, {"target": 19, "source": 56, "weight": 1}, {"target": 73, "source": 56, "weight": 1}, {"target": 10, "source": 56, "weight": 1}, {"target": 26, "source": 57, "weight": 1}, {"target": 7, "source": 57, "weight": 1}, {"target": 76, "source": 0, "weight": 1}, {"target": 21, "source": 0, "weight": 1}, {"target": 75, "source": 0, "weight": 1}, {"target": 58, "source": 16, "weight": 1}, {"target": 77, "source": 16, "weight": 1}, {"target": 22, "source": 32, "weight": 1}, {"target": 21, "source": 32, "weight": 1}, {"target": 58, "source": 32, "weight": 1}, {"target": 74, "source": 33, "weight": 1}, {"target": 73, "source": 33, "weight": 1}, {"target": 67, "source": 34, "weight": 1}, {"target": 8, "source": 18, "weight": 1}, {"target": 7, "source": 18, "weight": 1}, {"target": 39, "source": 18, "weight": 1}, {"target": 51, "source": 35, "weight": 1}, {"target": 7, "source": 1, "weight": 1}, {"target": 53, "source": 1, "weight": 1}, {"target": 79, "source": 1, "weight": 1}, {"target": 46, "source": 59, "weight": 1}, {"target": 48, "source": 59, "weight": 1}, {"target": 42, "source": 59, "weight": 1}, {"target": 22, "source": 60, "weight": 1}, {"target": 78, "source": 60, "weight": 1}, {"target": 79, "source": 60, "weight": 1}, {"target": 40, "source": 61, "weight": 1}, {"target": 62, "source": 61, "weight": 1}, {"target": 17, "source": 61, "weight": 1}, {"target": 39, "source": 61, "weight": 1}, {"target": 12, "source": 9, "weight": 1}, {"target": 7, "source": 9, "weight": 1}, {"target": 4, "source": 36, "weight": 1}, {"target": 74, "source": 36, "weight": 1}, {"target": 3, "source": 36, "weight": 1}, {"target": 70, "source": 37, "weight": 1}, {"target": 49, "source": 37, "weight": 1}, {"target": 6, "source": 37, "weight": 1}, {"target": 69, "source": 19, "weight": 1}, {"target": 43, "source": 19, "weight": 1}, {"target": 10, "source": 19, "weight": 1}, {"target": 44, "source": 38, "weight": 1}, {"target": 64, "source": 7, "weight": 1}, {"target": 29, "source": 7, "weight": 1}, {"target": 53, "source": 7, "weight": 1}, {"target": 66, "source": 7, "weight": 1}, {"target": 27, "source": 7, "weight": 1}, {"target": 67, "source": 7, "weight": 1}, {"target": 17, "source": 62, "weight": 1}, {"target": 26, "source": 2, "weight": 1}, {"target": 4, "source": 2, "weight": 1}, {"target": 74, "source": 2, "weight": 1}, {"target": 75, "source": 12, "weight": 1}, {"target": 21, "source": 12, "weight": 1}, {"target": 69, "source": 43, "weight": 1}, {"target": 63, "source": 43, "weight": 1}, {"target": 69, "source": 63, "weight": 1}, {"target": 65, "source": 63, "weight": 1}, {"target": 11, "source": 52, "weight": 1}, {"target": 39, "source": 52, "weight": 1}, {"target": 51, "source": 20, "weight": 1}, {"target": 23, "source": 65, "weight": 1}, {"target": 66, "source": 4, "weight": 1}, {"target": 30, "source": 4, "weight": 1}, {"target": 44, "source": 4, "weight": 1}, {"target": 45, "source": 4, "weight": 1}, {"target": 22, "source": 21, "weight": 1}, {"target": 24, "source": 21, "weight": 1}, {"target": 75, "source": 21, "weight": 1}, {"target": 41, "source": 66, "weight": 1}, {"target": 22, 
"source": 5, "weight": 1}, {"target": 50, "source": 22, "weight": 1}, {"target": 42, "source": 23, "weight": 1}, {"target": 28, "source": 24, "weight": 1}, {"target": 75, "source": 24, "weight": 1}, {"target": 29, "source": 67, "weight": 1}, {"target": 71, "source": 68, "weight": 1}, {"target": 39, "source": 68, "weight": 1}, {"target": 73, "source": 25, "weight": 1}, {"target": 10, "source": 25, "weight": 1}, {"target": 72, "source": 31, "weight": 1}, {"target": 28, "source": 31, "weight": 1}, {"target": 77, "source": 69, "weight": 1}, {"target": 79, "source": 41, "weight": 1}, {"target": 13, "source": 41, "weight": 1}, {"target": 26, "source": 44, "weight": 1}, {"target": 48, "source": 46, "weight": 1}, {"target": 51, "source": 46, "weight": 1}, {"target": 6, "source": 70, "weight": 1}, {"target": 11, "source": 47, "weight": 1}, {"target": 28, "source": 72, "weight": 1}, {"target": 8, "source": 72, "weight": 1}, {"target": 8, "source": 28, "weight": 1}, {"target": 79, "source": 53, "weight": 1}, {"target": 79, "source": 13, "weight": 1}], "nodes": [{"id": "matrix solid-phase dispersion"}, {"id": "systemic insecticides"}, {"id": "pyrethroid insecticide"}, {"id": "honey bee colony losses"}, {"id": "neonicotinoid insecticides"}, {"id": "aqueous media"}, {"id": "tau-fluvalinate residues"}, {"id": "honey bees"}, {"id": "stir bar sorptive extraction"}, {"id": "field conditions"}, {"id": "dispersive liquid-liquid microextraction"}, {"id": "honeybee colonies"}, {"id": "environmental contaminants"}, {"id": "osmia lignaria"}, {"id": "honey bee colonies"}, {"id": "chromatographic determination"}, {"id": "case study"}, {"id": "adult honey bees"}, {"id": "high levels"}, {"id": "diode-array detection"}, {"id": "semi-field conditions"}, {"id": "gas chromatography"}, {"id": "degradation products"}, {"id": "veterinary drugs"}, {"id": "electron-capture detection"}, {"id": "organochlorine pesticides"}, {"id": "varroa mites"}, {"id": "repellent chemicals"}, {"id": "solid-phase microextraction"}, {"id": "bee products"}, {"id": "foraging behavior"}, {"id": "organophosphorus pesticides"}, {"id": "solid-phase extraction"}, {"id": "solvent extraction"}, {"id": "honey samples"}, {"id": "diamondback moth"}, {"id": "potential impact"}, {"id": "hive ( part"}, {"id": "colony population decline"}, {"id": "honey bee colony"}, {"id": "larval honey bees"}, {"id": "megachile rotundata"}, {"id": "life-history traits"}, {"id": "liquid chromatography"}, {"id": "honey bee"}, {"id": "fluvalinate resistance"}, {"id": "pesticide determination"}, {"id": "chronic exposure"}, {"id": "liquid chromatography-tandem mass spectrometry"}, {"id": "agricultural landscapes"}, {"id": "flight activity"}, {"id": "other insects"}, {"id": "plant protection products"}, {"id": "foraging activity"}, {"id": "gas chromatography-mass spectrometry"}, {"id": "colony collapse disorder"}, {"id": "high performance liquid chromatography"}, {"id": "assess sublethal effects"}, {"id": "crop pollination"}, {"id": "multi-residue method"}, {"id": "pesticide risk assessment"}, {"id": "biotin-binding protein"}, {"id": "hypopharyngeal glands"}, {"id": "sensitive method"}, {"id": "bumble bees"}, {"id": "simultaneous determination"}, {"id": "laboratory tests"}, {"id": "pesticide residues"}, {"id": "agricultural landscape"}, {"id": "neonicotinoid insecticides residues"}, {"id": "pesticide fate"}, {"id": "ecosystem services"}, {"id": "liquid chromatography-mass spectrometry"}, {"id": "bee pollen"}, {"id": "gas chromatographic"}, {"id": "mass spectrometric"}, {"id": 
"honey bee losses"}, {"id": "crop pollinators"}, {"id": "colony health"}, {"id": "sublethal effects"}], "multigraph": false, "directed": false}
\ No newline at end of file