gargantext / haskell-gargantext · Commits

Commit 5e28eafa
authored Aug 26, 2024 by Alfredo Di Napoli
committed by Alfredo Di Napoli on Sep 30, 2024

Test ngrams update after docs trashing

parent f9f45da4
Pipeline #6725 failed with stages in 82 minutes and 24 seconds

Showing 6 changed files with 317 additions and 60 deletions (+317 -60)
gargantext.cabal                                          +2   -1
hie.yaml                                                  +76  -0
test-data/ngrams/GarganText_NgramsTerms-nodeId-177.json   +2   -0
test/Test/API/Routes.hs                                   +53  -1
test/Test/API/UpdateList.hs                               +147 -49
test/Test/Utils.hs                                        +37  -9
gargantext.cabal

```diff
@@ -34,6 +34,7 @@ data-files:
   ekg-assets/cross.png
   test-data/ngrams/GarganText_NgramsTerms-QuantumComputing.json
   test-data/ngrams/GarganText_DocsList-nodeId-177.json
+  test-data/ngrams/GarganText_NgramsTerms-nodeId-177.json
   test-data/ngrams/simple.json
   test-data/ngrams/simple.tsv
   test-data/phylo/bpa_phylo_test.json
@@ -249,6 +250,7 @@ library
   Gargantext.Database.Admin.Types.Hyperdata
   Gargantext.Database.Admin.Types.Hyperdata.Corpus
   Gargantext.Database.Admin.Types.Hyperdata.Document
+  Gargantext.Database.Admin.Types.Hyperdata.Folder
   Gargantext.Database.Admin.Types.Node
   Gargantext.Database.Prelude
   Gargantext.Database.Query.Facet
@@ -419,7 +421,6 @@ library
   Gargantext.Database.Admin.Types.Hyperdata.Dashboard
   Gargantext.Database.Admin.Types.Hyperdata.Default
   Gargantext.Database.Admin.Types.Hyperdata.File
-  Gargantext.Database.Admin.Types.Hyperdata.Folder
   Gargantext.Database.Admin.Types.Hyperdata.Frame
   Gargantext.Database.Admin.Types.Hyperdata.List
   Gargantext.Database.Admin.Types.Hyperdata.Model
```
hie.yaml (new file, mode 100644)

```yaml
cradle:
  cabal:
    - path: "./src"
      component: "lib:gargantext"
    - path: "./bin/gargantext-cli/Main.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/Admin.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/FileDiff.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/FilterTermsAndCooc.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/Import.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/Ini.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/Init.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/Invitations.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/ObfuscateDB.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/Parsers.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/Phylo.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/Phylo/Common.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/Phylo/Profile.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/Server/Routes.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/Types.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/CLI/Upgrade.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-cli/Paths_gargantext.hs"
      component: "gargantext:exe:gargantext-cli"
    - path: "./bin/gargantext-server/Main.hs"
      component: "gargantext:exe:gargantext-server"
    - path: "./bin/gargantext-server/Paths_gargantext.hs"
      component: "gargantext:exe:gargantext-server"
    - path: "./test"
      component: "gargantext:test:garg-test-tasty"
    - path: "./bin/gargantext-cli"
      component: "gargantext:test:garg-test-tasty"
    - path: "./test"
      component: "gargantext:test:garg-test-hspec"
    - path: "./bench/Main.hs"
      component: "gargantext:bench:garg-bench"
    - path: "./bench/Paths_gargantext.hs"
      component: "gargantext:bench:garg-bench"
```
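Each cradle entry above pairs a source path with the Cabal component that haskell-language-server should use to load files under that path. Any new top-level target would need its own pair; for instance, a hypothetical extra executable (the name here is made up) would be registered like this:

```yaml
# Hypothetical entry: a new "gargantext-worker" executable would need its own mapping.
- path: "./bin/gargantext-worker/Main.hs"
  component: "gargantext:exe:gargantext-worker"
```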
test-data/ngrams/GarganText_NgramsTerms-nodeId-177.json (new file, mode 100644)

```json
{
  "NgramsTerms": {
    "version": 1,
    "data": {
      "fortran": { "size": 2, "list": "MapTerm", "children": [] }
    }
  }
}
```
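The fixture encodes a minimal ngrams repository: version 1 and a single MapTerm entry, "fortran", with size 2 and no children. A rough, base-plus-containers model of that shape (type and field names here are illustrative; the real decoding goes through Gargantext.API.Ngrams.Types):

```haskell
import qualified Data.Map.Strict as Map

-- Illustrative stand-in for one repository entry: size, list type, child terms.
data Entry = Entry { size :: Int, list :: String, children :: [String] }
  deriving (Eq, Show)

-- The fixture above, as a map from term to entry.
fixture :: Map.Map String Entry
fixture = Map.fromList [("fortran", Entry 2 "MapTerm" [])]

main :: IO ()
main = print (size <$> Map.lookup "fortran" fixture)  -- Just 2
```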
test/Test/API/Routes.hs

```diff
@@ -8,6 +8,7 @@ module Test.API.Routes where
 import Data.Text.Encoding qualified as TE
 import Fmt ( Builder, (+|), (|+) )
 import Gargantext.API.Admin.Auth.Types ( AuthRequest, AuthResponse, Token )
+import Gargantext.API.Admin.Orchestrator.Types ( JobLog, asyncJobsAPI' )
 import Gargantext.API.Errors
 import Gargantext.API.HashedResponse ( HashedResponse )
 import Gargantext.API.Ngrams.Types ( NgramsTable, NgramsTablePatch, OrderBy, TabType, Versioned, VersionedWithCount )
@@ -17,18 +18,21 @@ import Gargantext.API.Routes.Named.Private hiding (tableNgramsAPI)
 import Gargantext.API.Routes.Named.Table
 import Gargantext.API.Types () -- MimeUnrender instances
 import Gargantext.Core.Text.Corpus.Query ( RawQuery )
-import Gargantext.Core.Types ( ListId, NodeId )
+import Gargantext.Core.Types ( ListId, NodeId, NodeType, NodeTableResult )
 import Gargantext.Core.Types.Main ( ListType )
 import Gargantext.Core.Types.Query ( Limit, MaxSize, MinSize, Offset )
+import Gargantext.Database.Admin.Types.Hyperdata
+import Gargantext.Database.Query.Facet qualified as Facet
 import Gargantext.Prelude
 import Network.HTTP.Types qualified as H
 import Network.Wai.Handler.Warp ( Port )
 import Servant ((:<|>)(..))
 import Servant.API.WebSocket qualified as WS
 import Servant.Auth.Client qualified as S
 import Servant.Client ( ClientM )
 import Servant.Client.Core ( RunClient, HasClient(..), Request )
 import Servant.Client.Generic ( genericClient, AsClientT )
+import Servant.Job.Async

 instance RunClient m => HasClient m WS.WebSocketPending where
@@ -50,6 +54,9 @@ mkUrl :: Port -> Builder -> ByteString
 mkUrl _port urlPiece = "/api/" +| curApi |+ urlPiece

+gqlUrl :: ByteString
+gqlUrl = "/gql"
+
 -- | The client for the full API. It also serves as a \"proof\" that our
 -- whole API has all the required instances to be used in a client.
@@ -70,6 +77,28 @@ auth_api = clientRoutes & apiWithCustomErrorScheme
 toServantToken :: Token -> S.Token
 toServantToken = S.Token . TE.encodeUtf8

+update_node :: Token -> NodeId -> UpdateNodeParams -> ClientM (JobStatus 'Safe JobLog)
+update_node (toServantToken -> token) nodeId params =
+  clientRoutes & apiWithCustomErrorScheme
+               & ($ GES_new)
+               & backendAPI
+               & backendAPI'
+               & mkBackEndAPI
+               & gargAPIVersion
+               & gargPrivateAPI
+               & mkPrivateAPI
+               & ($ token)
+               & nodeEp
+               & nodeEndpointAPI
+               & ($ nodeId)
+               & updateAPI
+               & updateNodeEp
+               & asyncJobsAPI'
+               & (\(_ :<|> submitForm :<|> _) -> submitForm (JobInput params Nothing))
+
 get_table_ngrams :: Token
                  -> NodeId
                  -> TabType
@@ -146,3 +175,26 @@ get_table (toServantToken -> token) nodeId =
                & ($ nodeId)
                & tableAPI
                & getTableEp
+
+get_children :: Token
+             -> NodeId
+             -> Maybe NodeType
+             -> Maybe Offset
+             -> Maybe Limit
+             -> ClientM (NodeTableResult HyperdataAny)
+get_children (toServantToken -> token) nodeId =
+  clientRoutes & apiWithCustomErrorScheme
+               & ($ GES_new)
+               & backendAPI
+               & backendAPI'
+               & mkBackEndAPI
+               & gargAPIVersion
+               & gargPrivateAPI
+               & mkPrivateAPI
+               & ($ token)
+               & nodeEp
+               & nodeEndpointAPI
+               & ($ nodeId)
+               & childrenAPI
+               & summaryChildrenEp
```
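The new `update_node` and `get_children` clients walk the generic routes record with `Data.Function.(&)`, using sections like `($ token)` and `($ nodeId)` to apply captured arguments along the way. The navigation pattern in isolation, with hypothetical two-level records standing in for the servant-generic client records:

```haskell
import Data.Function ((&))

-- Hypothetical routing records; in the real code these are the
-- servant-generic client records reached via clientRoutes.
newtype Nodes = Nodes { rename :: String -> String }
newtype Api   = Api   { node   :: Int -> Nodes }

api :: Api
api = Api { node = \nid -> Nodes { rename = \n -> "node " <> show nid <> ": " <> n } }

main :: IO ()
main = putStrLn (api & node & ($ 42) & rename & ($ "renamed"))  -- node 42: renamed
```

Each `&` either selects a field of the current record or, via a `($ x)` section, applies the function obtained so far, which is exactly how the long chains above reach a single endpoint.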
test/Test/API/UpdateList.hs

```diff
@@ -10,60 +10,67 @@ module Test.API.UpdateList (
   , newCorpusForUser
   , JobPollHandle(..)
   , pollUntilFinished
+  -- * Useful helpers
+  , updateNode
   ) where

 import Control.Lens (mapped, over)
 import Control.Monad.Fail (fail)
-import Data.Aeson qualified as JSON
 import Data.Aeson.QQ
-import Data.Map.Strict qualified as Map
+import Data.Aeson qualified as JSON
 import Data.Map.Strict.Patch qualified as PM
+import Data.Map.Strict qualified as Map
 import Data.Set qualified as Set
 import Data.String (fromString)
-import Data.Text qualified as T
 import Data.Text.IO qualified as TIO
+import Data.Text qualified as T
 import Fmt
 import Gargantext.API.Admin.Auth.Types (Token)
 import Gargantext.API.Admin.Orchestrator.Types
 import Gargantext.API.Errors
-import Gargantext.API.Ngrams qualified as APINgrams
+import Gargantext.API.HashedResponse
 import Gargantext.API.Ngrams.List ( ngramsListFromTSVData )
-import Gargantext.API.Ngrams.Types ( MSet(..), NgramsPatch(..), NgramsRepoElement(..), NgramsTablePatch(..), NgramsTerm(..), Versioned(..), mSetToList, toNgramsPatch, ne_children, ne_ngrams, vc_data, _NgramsTable )
+import Gargantext.API.Ngrams qualified as APINgrams
+import Gargantext.API.Ngrams.Types
 import Gargantext.API.Node.Corpus.New.Types qualified as FType
 import Gargantext.API.Node.Types
 import Gargantext.API.Routes.Named
 import Gargantext.API.Routes.Named.Corpus
+import Gargantext.API.Routes.Named.Node
 import Gargantext.API.Routes.Named.Private
 import Gargantext.Core qualified as Lang
 import Gargantext.Core.Text.Corpus.Query ( RawQuery(..) )
 import Gargantext.Core.Text.List.Social
 import Gargantext.Core.Text.Ngrams
-import Gargantext.Core.Types ( CorpusId, ListId, ListType(..), NodeId, _NodeId )
+import Gargantext.Core.Types ( CorpusId, ListId, NodeId, _NodeId )
 import Gargantext.Core.Types.Individu
+import Gargantext.Core.Types.Main ( ListType(..) )
+import Gargantext.Core.Types ( TableResult(..) )
 import Gargantext.Database.Action.User
 import Gargantext.Database.Admin.Types.Hyperdata.Corpus
+import Gargantext.Database.Admin.Types.Hyperdata.Folder ( defaultHyperdataFolderPrivate )
+import Gargantext.Database.Query.Facet qualified as Facet
 import Gargantext.Database.Query.Table.Node
 import Gargantext.Database.Query.Tree.Root
 import Gargantext.Prelude hiding (get)
 import Network.Wai.Handler.Warp qualified as Wai
 import Paths_gargantext ( getDataFileName )
 import qualified Prelude
 import Servant
 import Servant.Client
 import Servant.Job.Async
-import Test.API.Routes ( mkUrl, get_table_ngrams, put_table_ngrams, toServantToken, clientRoutes, get_table )
+import Test.API.Routes ( mkUrl, gqlUrl, get_table_ngrams, put_table_ngrams, toServantToken, clientRoutes, get_table, update_node )
 import Test.API.Setup ( withTestDBAndPort, setupEnvironment, createAliceAndBob )
 import Test.Database.Types
 import Test.Hspec
-import Test.Hspec.Wai ( shouldRespondWith )
 import Test.Hspec.Wai.Internal ( withApplication, WaiSession )
 import Test.Hspec.Wai.JSON ( json )
+import Test.Hspec.Wai ( shouldRespondWith )
 import Test.Types ( JobPollHandle(..) )
 import Test.Utils ( getJSON, pollUntilFinished, postJSONUrlEncoded, protectedJSON, withValidLogin )
 import Text.Printf ( printf )
 import Web.FormUrlEncoded
-import Gargantext.API.HashedResponse
-import Gargantext.Core.Types ( TableResult(..) )

 newCorpusForUser :: TestEnv -> T.Text -> IO NodeId
 newCorpusForUser env uname = flip runReaderT env $ runTestMonad $ do
@@ -73,11 +80,25 @@ newCorpusForUser env uname = flip runReaderT env $ runTestMonad $ do
   (corpusId:_) <- mk (Just corpusName) (Nothing :: Maybe HyperdataCorpus) parentId uid
   pure corpusId

-uploadJSONList :: Wai.Port -> Token -> CorpusId -> WaiSession () ListId
-uploadJSONList port token cId = do
+-- | Generate a 'Node' where we can append more data into, a bit reminiscent of the
+-- \"Private\" root node we use in the real Gargantext.
+newPrivateFolderForUser :: TestEnv -> T.Text -> IO NodeId
+newPrivateFolderForUser env uname = flip runReaderT env $ runTestMonad $ do
+  uid <- getUserId (UserName uname)
+  parentId <- getRootId (UserName uname)
+  let nodeName = "NodeFolderPrivate"
+  (nodeId:_) <- mk (Just nodeName) (Just defaultHyperdataFolderPrivate) parentId uid
+  pure nodeId
+
+uploadJSONList :: Wai.Port -> Token -> CorpusId -> FilePath -> WaiSession () ListId
+uploadJSONList port token cId pathToNgrams = do
   ([listId] :: [NodeId]) <- protectedJSON token "POST" (mkUrl port ("/node/" <> build cId)) [aesonQQ|{"pn_typename":"NodeList","pn_name":"Testing"}|]
   -- Upload the JSON doc
-  simpleNgrams <- liftIO (TIO.readFile =<< getDataFileName "test-data/ngrams/simple.json")
+  simpleNgrams <- liftIO (TIO.readFile =<< getDataFileName pathToNgrams)
   let jsonFileFormData = [ (T.pack "_wjf_data", simpleNgrams)
                          , ("_wjf_filetype", "JSON")
                          , ("_wjf_name", "simple_ngrams.json")
@@ -90,22 +111,6 @@ uploadJSONList port token cId = do
   pure listId

--- uploadListPatch :: Wai.Port
---                 -> Token
---                 -> CorpusId
---                 -> ListId
---                 -> APINgrams.Version
---                 -> PM.PatchMap NgramsTerm NgramsPatch
---                 -> WaiSession () ()
--- uploadListPatch port token cId listId version patch = do
---   let js = JSON.toJSON (Versioned version $ NgramsTablePatch patch)
---   -- panicTrace $ "[uploadListPatch] js: " <> show js
---   -- APINgrams.tableNgramsPut Terms listId (Versioned 0 $ NgramsTablePatch $ fst patch)
---   (_res :: Versioned NgramsTablePatch) <- protectedJSON token "PUT" (mkUrl port ("/node/" <> build cId <> "/ngrams?ngramsType=Terms&list=" <> build listId)) js
---   -- panicTrace $ "[uploadListPatch] res: " <> show res
---   pure ()
-
 tests :: Spec
 tests = sequential $ aroundAll withTestDBAndPort $ do
   describe "UpdateList API" $ do
@@ -119,7 +124,7 @@ tests = sequential $ aroundAll withTestDBAndPort $ do
       cId <- newCorpusForUser testEnv "alice"
       withApplication app $ do
        withValidLogin port "alice" (GargPassword "alice") $ \_clientEnv token -> do
-         listId <- uploadJSONList port token cId
+         listId <- uploadJSONList port token cId "test-data/ngrams/simple.json"
          -- Now check that we can retrieve the ngrams
          let getUrl = "/node/" +| listId |+ "/ngrams?ngramsType=Terms&listType=MapTerm&list=" +| listId |+ "&limit=50"
@@ -146,7 +151,7 @@ tests = sequential $ aroundAll withTestDBAndPort $ do
          -- this is the new term, under which importedTerm will be grouped
          let newTerm = NgramsTerm "new abelian group"
-         listId <- uploadJSONList port token cId
+         listId <- uploadJSONList port token cId "test-data/ngrams/simple.json"
          let checkNgrams expected = do
                eng <- liftIO $ runClientM (get_table_ngrams token cId APINgrams.Terms listId 10 Nothing (Just MapTerm) Nothing Nothing Nothing Nothing) clientEnv
@@ -187,7 +192,7 @@ tests = sequential $ aroundAll withTestDBAndPort $ do
          -- finally, upload the list again, the group should be as
          -- it was before (the bug in #313 was that "abelian group"
          -- was created again as a term with no parent)
-         _ <- uploadJSONList port token cId
+         _ <- uploadJSONList port token cId "test-data/ngrams/simple.json"
          -- old (imported) term shouldn't become parentless
          -- (#313 error was that we had [newTerm, importedTerm] instead)
@@ -255,37 +260,104 @@ tests = sequential $ aroundAll withTestDBAndPort $ do
     it "allows uploading a JSON docs file" $ \((testEnv, port), app) ->
       withApplication app $ do
         withValidLogin port "alice" (GargPassword "alice") $ \clientEnv token -> do
-          void $ updateFortranDocsList testEnv port clientEnv token
+          void $ createFortranDocsList testEnv port clientEnv token
+
+    it "doesn't use trashed documents for score calculation (#385)" $ \((testEnv, port), app) -> do
+      withApplication app $ do
+        withValidLogin port "alice" (GargPassword "alice") $ \clientEnv token -> do
-          (corpusId, _listId) <- updateFortranDocsList testEnv port clientEnv token
+          corpusId <- createFortranDocsList testEnv port clientEnv token
+          tr1 <- liftIO $ do
+            (HashedResponse _ tr1) <- checkEither $ runClientM (get_table token
+                                                                          corpusId
+                                                                          (Just APINgrams.Docs)
+                                                                          (Just 10)
+                                                                          (Just 0)
+                                                                          (Just Facet.DateDesc)
+                                                                          (Just $ RawQuery "fortran")
+                                                                          Nothing) clientEnv
+            length (tr_docs tr1) `shouldBe` 2
+            pure tr1
+
+          termsNodeId <- uploadJSONList port token corpusId "test-data/ngrams/GarganText_NgramsTerms-nodeId-177.json"
+
+          liftIO $ do
+            -- Now let's check the score for the \"fortran\" ngram.
+            (VersionedWithCount _ _ (NgramsTable [fortran_ngram])) <- checkEither $ runClientM (get_table_ngrams token corpusId APINgrams.Terms termsNodeId 10 (Just 0) (Just MapTerm) Nothing Nothing Nothing Nothing) clientEnv
+            length (_ne_occurrences fortran_ngram) `shouldBe` 2
+
+          -- At this point, we need to trash one of the two documents which contains
+          -- the \"fortran\" occurrence, and this should be reflected in the Ngrams.
+          trash_document token (Facet.facetDoc_id $ Prelude.head (tr_docs tr1)) corpusId
+
+          -- Check that the number of returned documents has decreased
+          liftIO $ do
+            (HashedResponse _ tr2) <- checkEither $ runClientM (get_table token corpusId (Just APINgrams.Docs) (Just 10) (Just 0) (Just Facet.DateDesc) (Just $ RawQuery "fortran") Nothing) clientEnv
+            length (tr_docs tr2) `shouldBe` 1
+
+          liftIO $ do
+            (HashedResponse _ TableResult{..}) <- checkEither $ runClientM (get_table token corpusId (Just APINgrams.Docs) (Just 10) (Just 0) (Just Facet.DateDesc) (Just $ RawQuery "fortran") Nothing) clientEnv
+            -- Now let's check the score for the \"fortran\" ngram. It must be decreased
+            -- by 1, because one of the matching documents has been trashed.
+            (VersionedWithCount _ _ (NgramsTable [fortran_ngram'])) <- checkEither $ runClientM (get_table_ngrams token corpusId APINgrams.Terms termsNodeId 10 (Just 0) (Just MapTerm) Nothing Nothing Nothing Nothing) clientEnv
+            length tr_docs `shouldBe` 2
+            length (_ne_occurrences fortran_ngram') `shouldBe` 1

-updateFortranDocsList :: TestEnv -> Int -> ClientEnv -> Token -> WaiSession () (NodeId, CorpusId)
-updateFortranDocsList testEnv port clientEnv token = do
-  corpusId <- liftIO $ newCorpusForUser testEnv "alice"
+createFortranDocsList :: TestEnv -> Int -> ClientEnv -> Token -> WaiSession () CorpusId
+createFortranDocsList testEnv port clientEnv token = do
+  folderId <- liftIO $ newPrivateFolderForUser testEnv "alice"
+  ([corpusId] :: [NodeId]) <- protectedJSON token "POST" (mkUrl port ("/node/" <> build folderId)) [aesonQQ|{"pn_typename":"NodeCorpus","pn_name":"TestCorpus"}|]
   -- Import the docsList with only two documents, both containing a \"fortran\" term.
-  ([listId] :: [CorpusId]) <- protectedJSON token "POST" (mkUrl port ("/node/" <> build corpusId)) [aesonQQ|{"pn_typename":"NodeCorpus","pn_name":"Testing"}|]
   simpleDocs <- liftIO (TIO.readFile =<< getDataFileName "test-data/ngrams/GarganText_DocsList-nodeId-177.json")
   let newWithForm = mkNewWithForm simpleDocs "GarganText_DocsList-nodeId-177.json"
   (j :: JobPollHandle) <- checkEither $ fmap toJobPollHandle <$> liftIO (runClientM (add_file_async token corpusId newWithForm) clientEnv)
   let mkPollUrl jh = "/corpus/" <> fromString (show $ _NodeId corpusId) <> "/add/form/async/" +| _jph_id jh |+ "/poll?limit=1"
   j' <- pollUntilFinished token port mkPollUrl j
   liftIO (_jph_status j' `shouldBe` "IsFinished")
-  pure (corpusId, listId)
+  pure corpusId
+
+updateNode :: Int -> ClientEnv -> Token -> NodeId -> WaiSession () ()
+updateNode port clientEnv token nodeId = do
+  let params = UpdateNodeParamsTexts Both
+  (j :: JobPollHandle) <- checkEither $ fmap toJobPollHandle <$> liftIO (runClientM (update_node token nodeId params) clientEnv)
+  let mkPollUrl jh = "/node/" <> fromString (show $ _NodeId nodeId) <> "/update/" +| _jph_id jh |+ "/poll?limit=1"
+  j' <- pollUntilFinished token port mkPollUrl j
+  liftIO (_jph_status j' `shouldBe` "IsFinished")

 toJobPollHandle :: JobStatus 'Safe JobLog -> JobPollHandle
 toJobPollHandle = either (\x -> panicTrace $ "toJobPollHandle:" <> T.pack x) identity . JSON.eitherDecode . JSON.encode
@@ -322,3 +394,29 @@ add_file_async (toServantToken -> token) corpusId nwf =
               & ($ corpusId)
               & asyncJobsAPI'
               & (\(_ :<|> submitForm :<|> _) -> submitForm (JobInput nwf Nothing))
+
+-- | Utility to trash a document by performing a raw query towards GQL. Not very type safe,
+-- but it will get the job done for now.
+trash_document :: Token
+               -> NodeId
+               -- ^ The context id to delete, i.e. the document ID.
+               -> CorpusId
+               -- ^ The parent corpus ID this document is attached to.
+               -> WaiSession () ()
+trash_document token docId cpsId =
+  void $ protectedJSON @JSON.Value token "POST" gqlUrl [aesonQQ|
+    {
+      "query": #{operation},
+      "operationName": "update_node_context_category",
+      "variables": {}
+    }
+  |]
+  where
+    operation :: Prelude.String
+    operation = printf "mutation update_node_context_category { update_node_context_category(context_id: %d, node_id: %d, category: 0) }" contextId corpusId
+
+    contextId :: Int
+    contextId = _NodeId docId
+
+    corpusId :: Int
+    corpusId = _NodeId cpsId
```
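`trash_document` builds its GraphQL mutation as a plain printf-formatted string before POSTing it to `/gql`. Rendered standalone with made-up IDs (context 42, corpus 7), the operation it sends looks like this:

```haskell
import Text.Printf (printf)

-- Same format string as trash_document uses; the IDs below are hypothetical.
operation :: Int -> Int -> String
operation contextId corpusId =
  printf "mutation update_node_context_category { update_node_context_category(context_id: %d, node_id: %d, category: 0) }"
         contextId corpusId

main :: IO ()
main = putStrLn (operation 42 7)
```

Setting `category: 0` is what marks the context as trashed in this mutation; the test then re-queries the docs table to observe the effect.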
test/Test/Utils.hs

```diff
@@ -7,33 +7,38 @@ module Test.Utils where

 import Control.Exception.Safe ()
 import Control.Monad ()
-import Data.Aeson qualified as JSON
 import Data.Aeson.KeyMap qualified as KM
+import Data.Aeson qualified as JSON
 import Data.ByteString.Char8 qualified as B
 import Data.ByteString.Lazy qualified as L
 import Data.Map.Strict qualified as Map
-import Data.Text qualified as T
 import Data.Text.Encoding qualified as TE
-import Data.Text.Lazy qualified as TL
 import Data.Text.Lazy.Encoding qualified as TLE
+import Data.Text.Lazy qualified as TL
+import Data.Text qualified as T
 import Data.TreeDiff
 import Fmt ( Builder )
-import Gargantext.API.Routes.Types ( xGargErrorScheme )
 import Gargantext.API.Admin.Auth.Types ( AuthRequest(..), Token, authRes_token )
+import Gargantext.API.Admin.Orchestrator.Types
+import Gargantext.API.Routes.Types ( xGargErrorScheme )
 import Gargantext.Core.Types.Individu ( Username, GargPassword )
 import Gargantext.Prelude
 import Network.HTTP.Client ( defaultManagerSettings, newManager )
-import Network.HTTP.Types ( Header, Method, status200 )
+import Network.HTTP.Client qualified as HTTP
 import Network.HTTP.Types.Header ( hAccept, hAuthorization, hContentType )
+import Network.HTTP.Types ( Header, Method, status200 )
 import Network.Wai.Handler.Warp ( Port )
 import Network.Wai.Test ( SResponse(..) )
 import Prelude qualified
-import Servant.Client ( ClientEnv, baseUrlPort, mkClientEnv, parseBaseUrl, runClientM )
+import Servant.Client ( ClientEnv, baseUrlPort, mkClientEnv, parseBaseUrl, runClientM, makeClientRequest, defaultMakeClientRequest )
+import Servant.Client.Core ( BaseUrl )
+import Servant.Client.Core.Request qualified as Client
+import System.Environment ( lookupEnv )
 import System.Timeout qualified as Timeout
 import Test.API.Routes ( auth_api, mkUrl )
 import Test.Hspec.Expectations
-import Test.Hspec.Wai ( MatchBody(..), WaiExpectation, WaiSession, request )
 import Test.Hspec.Wai.JSON ( FromValue(..) )
+import Test.Hspec.Wai ( MatchBody(..), WaiExpectation, WaiSession, request )
 import Test.Hspec.Wai.Matcher ( MatchHeader(..), ResponseMatcher(..), bodyEquals, formatHeader, match )
 import Test.Tasty.HUnit ( Assertion, assertBool )
 import Test.Types
@@ -183,12 +188,27 @@ withValidLogin port ur pwd act = do
   case result of
     Left err  -> liftIO $ throwIO $ Prelude.userError (show err)
     Right res -> do
+      traceEnabled <- isJust <$> liftIO (lookupEnv "GARG_DEBUG_LOGS")
       let token = res ^. authRes_token
-      act clientEnv0 token
+      act (clientEnv0 { makeClientRequest = gargMkRequest traceEnabled }) token
+
+-- | Allows to enable/disable logging of the input 'Request' to check what the
+-- client is actually sending to the server.
+-- FIXME(adn) We cannot upgrade to servant-client 0.20 due to OpenAlex:
+-- https://gitlab.iscpif.fr/gargantext/crawlers/openalex/blob/main/src/OpenAlex/ServantClientLogging.hs#L24
+gargMkRequest :: Bool -> BaseUrl -> Client.Request -> HTTP.Request
+gargMkRequest traceEnabled bu clientRq =
+  let httpReq = defaultMakeClientRequest bu clientRq
+  in case traceEnabled of
+       True  -> traceShowId httpReq
+       False -> httpReq

 -- | Poll the given job URL every second until it finishes.
 -- Retries up to 60 times (i.e. for 1 minute, before giving up)
 -- /NOTE(adn)/: Check the content of the \"events\" logs as a stopgap
 -- measure for #390.
 pollUntilFinished :: HasCallStack => Token -> Port
@@ -208,7 +228,15 @@ pollUntilFinished tkn port mkUrlPiece = go 60
-        | _jph_status h == "IsFailure"
-        -> panicTrace $ "JobPollHandle contains a failure: " <> TE.decodeUtf8 (L.toStrict $ JSON.encode h)
-        | otherwise -> pure h
+        -> case any hasError (_jph_log h) of
+             True  -> panicTrace $ "JobPollHandle contains a failure: " <> TE.decodeUtf8 (L.toStrict $ JSON.encode h)
+             False -> pure h
+
+-- FIXME(adn) This is wrong, errs should be >= 1.
+hasError :: JobLog -> Bool
+hasError JobLog{..} = case _scst_failed of
+  Nothing   -> False
+  Just errs -> errs > 1

 -- | Like HUnit's '@?=', but with a nicer error message in case the two entities are not equal.
 (@??=) :: (HasCallStack, ToExpr a, Eq a) => a -> a -> Assertion
```
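The new `hasError` predicate now drives the `pollUntilFinished` failure check, and the FIXME in the diff is worth spelling out: with `errs > 1`, a `JobLog` reporting exactly one failure is not flagged. A base-only rendering of the predicate, with the `JobLog` record reduced to its `_scst_failed` field for illustration:

```haskell
-- Reduced form of hasError from the diff: only the failure counter matters here.
hasError :: Maybe Int -> Bool
hasError Nothing     = False
hasError (Just errs) = errs > 1  -- FIXME'd in the source: a single failure slips through

main :: IO ()
main = print (map hasError [Nothing, Just 0, Just 1, Just 2])  -- [False,False,False,True]
```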
Przemyslaw Kaminski (@cgenie) mentioned in commit 5660aec0 · Oct 08, 2024