gargantext / crawlers / pubmed · Commits

Commit 23a3e2cd, authored Jul 08, 2019 by Mael NICOLAS
parent 62ecec55

    correct parsing bug, and permit huge query (tested with 100 000)

Showing 3 changed files, with 37 additions and 21 deletions:

- app/Main.hs (+1, -1)
- src/PUBMED.hs (+29, -12)
- src/PUBMED/Parser.hs (+7, -8)
app/Main.hs

```diff
@@ -6,4 +6,4 @@ import PUBMED (crawler)
 main :: IO ()
-main = crawler "bisphenol" (Just 50) >>= print
+main = crawler "bisphenol" (Just 1000000) >>= print
```
src/PUBMED.hs

```diff
@@ -11,10 +11,11 @@ import Network.HTTP.Client.TLS (tlsManagerSettings)
 import Servant.Client (runClientM, mkClientEnv, BaseUrl(..), Scheme(..))
 import Text.XML (parseLBS_, def)
 import Text.XML.Cursor (fromDocument, Cursor)
+import Text.XML.Stream.Parse (XmlException)
 import Data.Conduit (ConduitT)
 import Data.ByteString.Lazy (ByteString)
 import Data.ByteString.Char8 (pack)
-import Control.Monad.Catch (MonadThrow)
+import Control.Monad.Catch (catch, MonadThrow, Exception)
 import Control.Applicative
 import Data.Attoparsec.ByteString
```
```diff
@@ -47,6 +48,32 @@ removeSub = do
 type Query = Text
 type Limit = Integer
 
+runMultipleFPAR :: [Integer] -> IO (Either Text [PubMed])
+runMultipleFPAR ids
+  | length ids < 300 = runSimpleFetchPubmedAbstractRequest ids
+  | otherwise = do
+      runSimpleFetchPubmedAbstractRequest (Prelude.take 300 ids)
+        <> runMultipleFPAR (drop 300 ids)
+
+runSimpleFetchPubmedAbstractRequest :: [Integer] -> IO (Either Text [PubMed])
+runSimpleFetchPubmedAbstractRequest ids = do
+  manager' <- newManager tlsManagerSettings
+  res <- runClientM
+    (fetch (Just "pubmed") (Just "abstract") ids)
+    (mkClientEnv manager' $ BaseUrl Https "eutils.ncbi.nlm.nih.gov" 443 "entrez/eutils")
+  case res of
+    (Left err) -> pure (Left . T.pack $ show err)
+    (Right (BsXml abs)) ->
+      case parseOnly removeSub $ LBS.toStrict abs of
+        (Left err'') -> pure (Left $ T.pack err'')
+        (Right v) -> do
+          parsed <- catch (pubMedParser v)
+                          ((\e -> pure []) :: XmlException -> IO [PubMed])
+          Right <$> pure parsed
+
 crawler :: Text -> Maybe Limit -> IO (Either Text [PubMed])
 crawler = runSimpleFindPubmedAbstractRequest
```
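The new `runSimpleFetchPubmedAbstractRequest` swallows XML parse failures by passing `catch` a handler with an explicit type annotation, which is what selects the exception type to intercept. A base-only sketch of that pattern (using `Control.Exception` and `ArithException` as stand-ins for `Control.Monad.Catch` and xml-conduit's `XmlException`; `safeDiv` is a made-up name):

```haskell
import Control.Exception (ArithException, catch, evaluate)

-- Same shape as the diff's
--   catch (pubMedParser v) ((\e -> pure []) :: XmlException -> IO [PubMed]):
-- the annotation on the handler tells `catch` which exception type to catch,
-- and the handler substitutes a fallback value.
safeDiv :: Int -> Int -> IO Int
safeDiv a b =
  catch (evaluate (a `div` b))                      -- force the division inside IO
        ((\_ -> pure 0) :: ArithException -> IO Int) -- div-by-zero is an ArithException
```

For example, `safeDiv 7 0` yields the fallback `0` instead of crashing, just as the crawler falls back to `[]` for an abstract that fails to parse.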
```diff
@@ -60,15 +87,5 @@ runSimpleFindPubmedAbstractRequest query limit = do
     (Left err) -> pure (Left $ T.pack $ show err)
     (Right (BsXml docs)) -> do
       let docIds = runParser parseDocId docs
-      res' <- runClientM
-        (fetch (Just "pubmed") (Just "abstract") docIds)
-        (mkClientEnv manager' $ BaseUrl Https "eutils.ncbi.nlm.nih.gov" 443 "entrez/eutils")
-      case res' of
-        (Left err') -> pure (Left $ T.pack $ show err')
-        (Right (BsXml abstracts)) -> do
-          -- TODO remove "</sub>" maybe there is a cleaner way with isEndOfInput
-          case (parseOnly removeSub $ LBS.toStrict abstracts <> "</sub>") of
-            (Left err'') -> pure (Left $ T.pack err'')
-            (Right v) -> Right <$> pubMedParser v
+      runMultipleFPAR docIds
```
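`runMultipleFPAR` is what "permits huge query": it splits the id list into batches of at most 300, presumably to keep each efetch request below NCBI's size limits, and glues the batch results together with `<>`. A base-only sketch of the splitting, plus a caveat about that `<>` (an observation about base's instances, not something this commit addresses): `Semigroup (Either a b)` keeps the first `Right`, so combining `IO (Either Text [PubMed])` values this way returns only the first successful batch. The names `batchesOf` and `firstRightWins` are illustrative, not from the repo:

```haskell
-- Pure analogue of the batching in runMultipleFPAR: the first 300 ids,
-- then the same treatment recursively for the rest.
batchesOf :: Int -> [a] -> [[a]]
batchesOf _ [] = []
batchesOf n xs = take n xs : batchesOf n (drop n xs)

-- Caveat: base defines
--   instance Semigroup (Either a b) where Left _ <> b = b; a <> _ = a
-- so the first Right wins and later batches are discarded:
firstRightWins :: Either String [Int]
firstRightWins = Right [1] <> Right [2]   -- == Right [1]
```

Appending the payloads themselves would need the `(<>)` lifted into both layers, e.g. `liftA2 (liftA2 (<>))` over `IO (Either Text [PubMed])`.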
src/PUBMED/Parser.hs

```diff
@@ -137,19 +137,18 @@ parseArticle = do
   authors <- manyTagsUntil "AuthorList" . many $
     tagIgnoreAttrs "Author" $ do
-      ln <- tagIgnoreAttrs "LastName" content
-      fn <- tagIgnoreAttrs "ForeName" content
+      ln <- manyTagsUntil "LastName" content
+      fn <- manyTagsUntil "ForeName" content
       affi <- manyTagsUntil "AffiliationInfo" $
-        tagIgnoreAttrs "Affiliation" content
+        do aff <- manyTagsUntil "Affiliation" content
+           _ <- many ignoreAnyTreeContent
+           return aff
       _ <- many ignoreAnyTreeContent
       return Author { lastName = ln, foreName = fn, affiliation = fromMaybe Nothing affi }
 
   abstracts <-
     manyTagsUntil "Abstract" . many $ do
-      txt <- tagIgnoreAttrs "AbstractT.Text" $ do
-        c <- content
-        _ <- many ignoreAnyTreeContent
-        return c
+      txt <- tagIgnoreAttrs "AbstractText" content
       _ <- many ignoreAnyTreeContent
       return txt
 -- TODO add authos
```
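In the `Author` record, `affiliation = fromMaybe Nothing affi` flattens the nested `Maybe` that arises when an optional `AffiliationInfo` wrapper contains an optional `Affiliation` parse (presumably `affi :: Maybe (Maybe Text)` here). `fromMaybe Nothing` on a nested `Maybe` is exactly `Control.Monad.join`. A base-only sketch (`flattenAffi` and the sample values are made up):

```haskell
import Control.Monad (join)
import Data.Maybe (fromMaybe)

-- `fromMaybe Nothing` collapses two layers of Maybe, the same as `join`:
-- Nothing and Just Nothing both flatten to Nothing; Just (Just x) to Just x.
flattenAffi :: Maybe (Maybe String) -> Maybe String
flattenAffi = fromMaybe Nothing
```

The behaviour matches `join` on every input, so either spelling would do; the diff keeps the `fromMaybe Nothing` form.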