#foswiki 2012-12-14,Fri


Who | What | When
SvenDowideitah, a nice quiet day (i hope) [01:27]
gac410It has been pretty quiet lately. [01:28]
SvenDowideitfixed my desktop server, now trying to set up debian on my chromebox
and then, back to SAPin
[01:28]
kip3fhi gac410 SvenDowideit [01:42]
gac410hi kip3f [01:42]
kip3fkip3f switched from cgi on windows to fastcgi on unix
MUCH better
[01:43]
gac410That should give just a touch better performance :) [01:43]
kip3fthere should be a warning label [01:44]
GithubBot[foswiki] FoswikiBot pushed 1 new commit to master: http://git.io/mzJu6w
foswiki/master a69b08a TimotheLitt: Item12180: Use binmode on input and support shell conventions....
[01:50]
***GithubBot has left [01:50]
FoswikiBothttp://foswiki.org/Tasks/Item12180 [ Item12180: Implementation for AJAXOnDemandCheckersForConfigure ] [01:50]
....... (idle for 31mn)
SvenDowideitheya kip3f :) [02:21]
kip3fhi [02:21]
GithubBot[foswiki] FoswikiBot pushed 1 new commit to master: http://git.io/jKSMzg
foswiki/master 63bdc0a GeorgeClark: Item12271: Unit tests...
[02:21]
***GithubBot has left [02:21]
FoswikiBothttp://foswiki.org/Tasks/Item12271 [ Item12271: ONLYIF is too restrictive to be all that useful ] [02:21]
SvenDowideitoh wonderful
not to suggest we need to do anything about it
but the SAP module is tossing trace files into....
the bin dir
[02:22]
gac410foswiki? or am I on the wrong planet/channel? [02:23]
kip3fthere is a SAP module? [02:23]
SvenDowideiti hope that's due to an ENV var i can muck with, but would be nice if foswiki managed to tell anything that might save files to go elsewhere
yup to both :)
blooming hard work writing it tho
i wonder if foswiki should really 'just' chdir $working
or something
tho as my $working dir is sometimes set to very odd places, it might also cause craptackular results
[02:23]
gac410bin is the current directory, so whatever is writing .. hm. There is some complexity with finding stuff in the cur dir. [02:25]
SvenDowideityup
but as i say - this is idle curio atm, not urgent, nor important :)
[02:25]
gac410iirc curdir not being bin causes some real trouble. I think Babar tried to address that some in trunk - removing . from path, but it broke on some systems. [02:26]
SvenDowideittypish
i now also wish Babar would reappear
his email system is back to spamming me
that too
[02:26]
gac41055 messages still queued on the foswiki server for his domain [02:28]
.......... (idle for 47mn)
kip3fdoes normalizeWebTopicName() untaint?
nope
[03:15]
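Since normalizeWebTopicName() doesn't untaint, callers have to launder the result themselves. A minimal sketch of the standard Perl taint-laundering idiom (the helper name is made up; only the capture trick matters):

```perl
use strict;
use warnings;

# Hypothetical helper: validate a web/topic name and return an untainted copy.
# In Perl, substrings captured by a regex are not tainted, so a strict
# character-class match both validates and launders the value.
sub untaint_wikiword {
    my ($name) = @_;
    return $1 if $name =~ /^([A-Za-z0-9_.\/]+)$/;
    die "refusing unsafe name: $name\n";
}

print untaint_wikiword('Sandbox.WebHome'), "\n";
```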
....... (idle for 32mn)
GithubBot[foswiki] FoswikiBot pushed 1 new commit to Release01x01: http://git.io/YayuMg
foswiki/Release01x01 632bd5d TimotheLitt: Item12279: Work-around for IO:Socket::SSL misbehavior. Thanks to George Clark for testing....
[03:49]
***GithubBot has left [03:49]
FoswikiBothttp://foswiki.org/Tasks/Item12279 [ Item12279: Net::SMTP::SSL Email will eventually need options for SSL_verify and SSL_ca ] [03:49]
***ChanServ sets mode: +o Babar [03:56]
kip3fSvenDowideit you got your wish! [03:58]
gac410Wohoo Babar is back. Welcome! [03:59]
Babaryeah! Was painful...
fsck!
my extra disk still isn't recognized...
rebooting again I guess...
[03:59]
gac410You have quite a mailq on f.o. [04:01]
Babaryeah... it should clear up soonish :) [04:02]
SvenDowideitheya Babar welcome back
hope 'painful' is ephemeral and gone
[04:04]
BabarSven: what's your graphical screen tool again? [04:04]
SvenDowideitxpra? [04:04]
Babarright! thanks. [04:04]
SvenDowideitSvenDowideit just completed installing debian on the chromebox, as those damned google engineers fixed the bug i was exploiting to get dual screen
and tada, xpra sessions forwarded nicely
[04:05]
Babarweird...
xpra doesn't work anymore :(
of course... when I say that, it works again
[04:07]
gac410magic [04:10]
Babaror not.
ok, brb
[04:11]
***ChanServ sets mode: +o Babar [04:20]
Babarand back! [04:22]
.... (idle for 15mn)
crap!
just realized... I formatted the wrong disk :(
[04:37]
gac410*that's* not good
g'night all
[04:38]
***gac410 has left [04:43]
............................ (idle for 2h18mn)
ChanServ sets mode: +o MichaelDaum [07:01]
................... (idle for 1h30mn)
laenEHLO
Guys, how do i manually create a web, without using the form?
I'm guessing i need to do some additional changes in text files, just filling a pub/NEW and data/NEW directory isn't enough
Aaaah, i see, missing the _default directory.
[08:31]
CDotlaen: the minimum condition for a web is simply a directory on the server that contains at least a file called "WebPreferences.txt". [08:45]
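CDot's minimum condition, spelled out as a directory layout (paths illustrative, assuming a stock install):

```
data/NewWeb/WebPreferences.txt    # the one required file; copy from data/_default
pub/NewWeb/                       # attachment area, only needed once files are uploaded
```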
MichaelDaumWebStatistics should be redesigned from scratch
similarly WebNotify
[08:45]
laenTop :)
CDot: what's the parameter to make a web hidden?
[08:47]
jastSet SITEMAPLIST = off and/or Set NOSEARCHALL = on [08:48]
laenThanks :) [08:49]
..... (idle for 22mn)
MartinKaufmannHi! Any Tomcat/Solr experts around? [09:11]
CDotMichaelDaum: +1 to that [09:11]
MartinKaufmannI'm trying to setup a tomcat6 server to run solr on. [09:12]
MichaelDaumCDot, the stats records should be stored as %META:STATS records and displayed using some %QUERY/%FORMAT foo [09:13]
MartinKaufmannThe server runs fine, i.e. I can access http://localhost:8080/solr/foswiki/admin/ [09:13]
CDotMichaelDaum: actually, I'm not sure I agree about WebNotify. What did you have in mind? [09:13]
MichaelDaumas a basic design pattern, data like this should be stored in %META:FOOBAR records. storing subscriptions that way seems another obvious application. [09:14]
MartinKaufmannHowever, SolrPlugin can't connect to solr. I set {SolrPlugin}{Url} to http://localhost:8080/solr [09:14]
MichaelDaumthese things can be easily indexed for whatever purpose there is [09:14]
jastMartinKaufmann: try /solr/foswiki [09:15]
MichaelDaumCDot, does that make sense? [09:15]
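MichaelDaum's idea would replace the generated TML table with structured records in topic meta. A hypothetical sketch (the field names are invented for illustration; no such record type exists in the core):

```
%META:STATS{date="2012-12-14" views="1234" saves="56" uploads="7" topic="WebHome"}%
```

which could then be rendered with %QUERY%/%FORMAT% macros instead of being parsed back out of a table.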
MartinKaufmannjast: I'm sure I tried that before but now it works... ;-) [09:16]
jastI've been working on an alternative subscription mechanism that stores a user's subscriptions in their user topic [09:16]
MartinKaufmannThanks for your input! [09:16]
jastMartinKaufmann: you're welcome [09:16]
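For the record, the fix from the exchange above as a LocalSite.cfg fragment (host and port are whatever Tomcat listens on; the point is the /solr/foswiki core path rather than the bare /solr root):

```perl
# LocalSite.cfg (illustrative)
$Foswiki::cfg{SolrPlugin}{Url} = 'http://localhost:8080/solr/foswiki';
```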
CDotnot really, no. I think it misses one of the original and attractive design features of twiki; keeping things in-band, where they can be processed by non-specific tools [09:17]
jastI've set up SolrPlugin so many times, I can do it in my sleep ;) [09:17]
CDotI understand what you want to do, but it feels wrong. It is so easy to pull that content from the text, why hide it away?
fair enough if you want to cache it, go ahead. But the master source should always be "user friendly"
[09:17]
MichaelDaumCDot, WebStatistics' data _is_ effectively well hidden away in some tml table.
and it is readonly anyway
[09:18]
CDotwebstatistics is different [09:18]
jastit's much more difficult to automatically edit subscriptions later on if they're all combined in a big text topic [09:18]
CDotthat is generated content; and I agree that should be cleaned up [09:18]
jastafaict there is no mechanism for that at all right now [09:18]
CDotjast: yes, it's difficult; that's why I provided the MailerContrib with an API to make it easier :-) [09:19]
MichaelDaumI am unsure about WebNotify as well. It still is a good proof of concept for the repeated maintenance records foswiki is using. [09:19]
CDotI agree about webstatistics; it's misplaced in a topic. But webnotify - I need much more convincing there. [09:19]
jastI'm sure it's a complete coincidence that there's a SubscribePlugin but not an UnsubscribePlugin ;) [09:20]
CDotjast: the coincidence happens this way; I was paid to develop subscribeplugin. I was not paid to develop unsubscribeplugin. :-) [09:20]
MichaelDaumstoring subscription info in user topics does have some charm. though it requires the core to have a clear concept of what a user topic actually is, and then perform a search to harvest all required data when creating mail notifications. [09:20]
jastin any case, we have a wiki with ~20k users [09:20]
CDothowever unsubscription is fully supported in the API [09:21]
jastif we actually used WebNotify like that, the topic would probably get *very* big [09:21]
CDotIF I was doing it again, I would not do webnotify the way it is done
I would treat subscriptions as user meta-data
and look up users to find out their subscriptions
WebNotify is just a hack
[09:21]
MichaelDaumit is shocking for newbies [09:22]
CDotdon't disagree; it's insecure and generally nasty. But moving the content into meta in the WebNotify topic - meh. That's putting lipstick on a pig. [09:23]
MichaelDaumyou easily mess up somebody else's subscriptions.
yea true
[09:23]
CDotmuch better to create a proper users DB
and hold the subscriptions as meta-data in that DB
[09:23]
MichaelDaumI'd rather store them as %META:SUBSCRIPTION records in the users' topic and then index it for faster access into a DB. [09:24]
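As a sketch of what MichaelDaum proposes, a subscription stored in the user's topic might look like this (attribute names are invented; nothing in MailerContrib defines such a record today):

```
%META:SUBSCRIPTION{web="Sandbox" topics="WebChanges, Item*" mode="immediate"}%
```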
CDotpretty much the same thing
BUT
if you are going to use a user's topic, what about all the other meta-data that topic contains?
for example, personal settings
what's so different about subscriptions?
CDot prefers to move *all* that shite out-of-band
and use a single, simple mechanism to do it
[09:24]
MichaelDaumthere are lots of user-related settings and preferences [09:26]
CDotindeed [09:26]
MichaelDaumsome of them even come from outside, not stored in foswiki, but displayed on a profile page [09:26]
CDotyup [09:26]
MichaelDaumthen we have stuff that is a single-value preference, like NOWYSIWYG = on
no ui to maintain them, but that would be simple to add
subscriptions are different in two respects
first there's an array of subscriptions per user
second each subscription is a structured beast
[09:27]
CDotthere are many settings that have internal structure
and really, a generic mechanism to create and edit - for example - arrays of hashes, arrays of arrays etc, would be a beauty
* Set MYDATA = { .... }
[09:28]
MichaelDaumhave you looked at MetaDataPlugin? [09:30]
CDotnot recently [09:30]
MichaelDaumit basically combines registerMeta with DataForms [09:30]
CDotCDot doesn't believe something like this should be done as a plugin [09:30]
jastMetaDataPlugin is incredibly useful
in general, I think out-of-band data would open the door for a lot of very interesting things that are hard to do cleanly right now... but you have to be very careful to keep using it as convenient as possible
processing with non-specific tools is rather tricky, anyway, if you want to *modify* things
[09:31]
CDot*sigh* another macro supporting format=, header= etc that doesn't observe the established standards [09:34]
MichaelDaumout-of-band: do you mean store info outside of the txt file, or outside of the wiki text itself? [09:34]
jastI don't know exactly what I mean [09:35]
CDot%META is already out-of-band, if you define "in band" as "the topic text" [09:36]
jastI suppose my point is that the current approach of storing metadata inside the txt files is great in that it creates a strong coupling, but bad in that it's very hard to do fast queries
(and using a not-quite-realtime external index like Solr to accelerate queries is really not quite ideal)
[09:36]
CDotyes, that's true - and that's the reason we have had so many attempts to convert it to a DB store [09:36]
jastI'm not sure a DB store is so much better
databases tend to introduce a lot of overhead for certain usage patterns
I used to have my spam training data in a database. that really brought the server to its knees after a short while...
[09:36]
CDotright - and that's why so many of us drifted towards caches instead
effectively we "mine" the topic text for cacheable data
[09:37]
jastwhat I'd love to see is a metadata indexing system (with instant updates) that's tightly integrated with foswiki
in fact as far as I'm concerned something like that belongs in the core
[09:38]
CDotjast: explain "metadata indexing system" [09:38]
SvenDowideitso what dbcache, mongodb and sqlplugin do.. [09:38]
CDotSvenDowideit: shhhh, let jast explain [09:38]
SvenDowideitthey're all very integrated with foswiki [09:38]
jastDBCacheContrib is not an index; it just stores everything in a pre-parsed format [09:38]
CDotcorrect, but that's old tech [09:39]
SvenDowideitoh? i thought it had indexes, oh well :) [09:39]
CDotit doesn't store indexes; it builds them for every query (not good) [09:39]
SvenDowideitSvenDowideit probably needs to examine making a topic's table QUERY-able :/ [09:39]
jastplus it combines all data for each web in a single file [09:39]
SvenDowideitmuch easier now thanks to CDot [09:40]
jastif a topic changes, it has to read in the entire file, then write a new version [09:40]
CDotjast: don't get hung up on DBCache - as I said, old tech [09:40]
jastyeah, just to clarify :) [09:40]
CDotthink about mongo, DBI etc [09:40]
jastSqlPlugin, afaict, allows SQL queries but doesn't magically get foswiki data into a database
so that leaves Mongo and DBI and such
I think those are fairly good approaches
even though I haven't looked at all the details :)
[09:40]
CDotok. Those are all caches, using master data stored in the topic text [09:41]
SvenDowideitmongo is only barely a cache actually [09:41]
CDotof course, they *could* be changed to use "side" data to enhance the topics [09:41]
SvenDowideitin that it can go either way [09:41]
CDotsame with DBI, I think [09:42]
SvenDowideitbut its easier atm to play 'safe' [09:42]
CDotwhatever happened to Julian's work?
VDBI?
[09:42]
jastDBI does plan to provide itself as a store for topic data, right? [09:42]
SvenDowideitactually, there's a sidewards plan i have
i'm planning to make topic store a DBI based store
[09:42]
CDotjast: if I knew what you meant by that question, I'd answer it..... [09:43]
jastbasically, allow DBI to be used as a store instead of as a cache [09:43]
MichaelDaumMichaelDaum ran out of popcorn yet hasnt read anything new [09:43]
CDotMichaelDaum: (09:54:50) SvenDowideit: i'm planning to make topic store a DBI based store
that's new
[09:43]
jastyeah, I thought about that, too [09:44]
SvenDowideiti'm doing some work on DBDs - and it's um, interesting [09:44]
jastbut I think SQL doesn't map well onto the kind of rather flexible metadata foswiki has [09:44]
CDoty, that's what stopped me when I tried. [09:44]
SvenDowideitand in the process, making store and the query algo's use SQL isn't hard [09:44]
MichaelDaumjulian is on that DBI task too. [09:44]
jastso MongoDB-like approaches seem like a better fit [09:44]
SvenDowideitits not that big a problem actually
mongoDB is a terrible fit
[09:44]
jasthence the "-like" ;) [09:45]
SvenDowideitok, i'll re-write that as
mongodb has exactly the same issues sql has
plus that it does not scale for the usages foswiki has
[09:45]
CDotwell... maybe. I often think DBI/SQL is just too low level, and what we really need is something like OLAP [09:45]
jasthere's my ideal list of features/properties: [09:45]
MichaelDaumwhat about Couch 2.0: reading http://www.h-online.com/open/news/item/Couchbase-Server-2-0-combines-memcached-and-CouchDB-1767344.html gives hope [09:46]
SvenDowideitcouch has other pains for foswiki [09:46]
CDotmy experiences with CouchDB left me with burned fingers; but maybe it's better now [09:46]
SvenDowideitessentially, the new nosql tools make the assumption that you can tune your db for a specific set of queries [09:47]
MichaelDaumoutch [09:47]
CDotzactly [09:47]
SvenDowideitand foswiki has both dynamic schema _and_ dynamic adhoc queries
and _that_ requires a mature low level tech like SQL
[09:47]
jast- I don't care about the storage format, but updating a topic must simultaneously (atomically) update the indexes [09:47]
SvenDowideitthat is the case for all foswiki 1.2 store caches [09:47]
jast- relevant indexes can be chosen by wiki admins in some way [09:47]
SvenDowideitie, mongo and dbi
though indexes can't be 'chosen' by an admin
because it's unpredictable
[09:48]
CDot(09:59:40) jast: - relevant indexes can be chosen by wiki admins in some way - pfffft, most admins couldn't find their bum with both hands; and you want them to create indexes? [09:48]
SvenDowideitunless you prevent users from inputting queries [09:48]
jastwell, my use case is for admins that *can* :} [09:48]
SvenDowideitno, your admins can't [09:49]
jastwell, ideally a query would automatically use indexes as they are available [09:49]
SvenDowideitunless you prevent users from editing topics with queries
and prevent customisable queries
[09:49]
jastand queries could be limited (by admins) to a certain runtime, so you couldn't bring down the entire server with a bad query [09:50]
CDotit *might* be interesting to work out the indexes by crawling the searches in the wiki. Except, of course, that searches can be dynamically generated from the results of other searches..... [09:50]
SvenDowideitthat's a problem with mongodb - if the query is too hard, it just gives up [10:50]
jastcute :) [09:50]
SvenDowideitso letting an admin choose to do that to all queries they don't know about is terrible
its essentially de-wiki-ing query
[09:51]
jastas I said, all queries are still possible [09:51]
CDotI sometimes drift back to TCH's approach of caching query results per-topic.
except, instead of caching the query results, cache the query and the indexes
[09:51]
SvenDowideiti prefer to cache the results of a query
and use them on multiple topics
except - as pharvey found
[09:51]
pharveyI should point out the bottleneck I have with mongodb actually isn't indexes - I can turn them all off, the difference in perf isn't profound (unless you're doing a nested search) [09:52]
SvenDowideitrendering some topicsets is hideous [09:52]
pharveyexactly [09:52]
jastwe do a lot of queries for input field autosuggestions. that's a whole lot of different queries. [09:52]
SvenDowideitbut that i sent you a proposal to fix too
but um, we didn't get funding :)
autosuggestion really is a simplified issue, yes
i use browser side cache for that
but that sux when they create a new topic
[09:52]
jastthat only works if users search for a small set of things, and only if you have comparably few users
otherwise you don't gain much
[09:53]
SvenDowideitdepends :) [09:53]
jastit's also not very useful if the data changes very frequently [09:53]
SvenDowideitif you cache the full list on the browser
and then use js to filter :)
[09:53]
jastthe full list? of four-figure topic counts? [09:53]
SvenDowideitits surprising what browsers keep sometimes [09:54]
jastyeah, but the list changes quite often
or, rather, whenever it changes people need the new version pretty much instantly
[09:54]
MichaelDaumanyway [09:54]
SvenDowideitas i pointed out at the begining - thats the caveat :) [09:54]
MichaelDaumthe point in foswiki is a strong query language
when the database/indexer doesn't support it, you are stuffed.
[09:55]
SvenDowideiti've not had a problem with supporting it - so far its been a problem of coding the hoisting
for a fun eg - see the guy that implemented mongodb on postgres
[09:56]
MichaelDaumthere are only a few candidates for capable query languages. one of them is sql. good choice. not optimal. [09:56]
JulianLevensGuys, I am still actively working on VDBI. Hoping to have whole days between Christmas and new year to work on it [09:56]
SvenDowideitJulianLevens sweet :)
hows the query->sql hoisting going?
[09:56]
JulianLevensI stumbled across this article: http://en.wikipedia.org/wiki/Entity%E2%80%93attribute%E2%80%93value_model [09:57]
SvenDowideitpersonally, i hope&expect to hoist from query->DBI parsed sql tree [09:57]
JulianLevensSo now I know what I'm doing :) [09:57]
SvenDowideitgiggle [09:57]
pharveyAll the indexes in the world don't help if you're rendering $formfield(Foo) on 500 topics, and Foswiki asks the 500 $topicObjects to be fully loaded. On some of our webs, average topic size is ~50KiB, so nearly 25MiB per page view from the DB [09:58]
SvenDowideitpharvey not really
that is almost entirely an implementation problem
[09:58]
pharvey10ms with indexes, 100ms without, and 5s waiting for data
ok
[09:58]
SvenDowideitonly because you're asking for * [09:58]
jasthoisting a query to DBI doesn't magically make it fast, either, though [09:59]
JulianLevensI'm doing some redesign. I was planning on sparse versioning, but that will be quite a bit of work. Fortunately I had a full-fat versioning brainwave which will enable me to reuse a lot of existing code [09:59]
SvenDowideitbecause we did not have the (non-trivial) funding to implement better [09:59]
jastyou will still always have the challenge of picking the right indexes [09:59]
SvenDowideitthere will always be index choices yes [09:59]
JulianLevensjast: My design means that everything is indexed [09:59]
SvenDowideitbut even without an index, an sql db can help [10:00]
jasteverything is never indexed
[10:00]
MichaelDaumpharvey, indexes do help when they come with the $formfield(Foo) content as well, instead of dragging it out from txt files again. [10:00]
pharveyI have *no* indexes set on my backlinks. A full scan of all backlinks on 230,000 topics on our wiki takes about 2s [10:00]
JulianLevensAs has been noted, a lot of performance is because the DB has been carefully crafted for good io and will use ram (GBs these days) to cache a lot of stuff [10:01]
pharveyMichaelDaum, I guess I'm referring to the fact that we didn't get time as Sven says, to improve the store/query API to be more selective about which bits of the topic to return [10:01]
SvenDowideitmy proposal was to make format="" into a js statement that can be optionally evaluated on the server
ie, do query, find out its result set would suck, then re-run with formatting running on mongodb
[10:01]
MichaelDaumSvenDowideit, how about returning json and format on the client side? [10:02]
SvenDowideitwe do
mongo is better than that
it returns bson
(binary json)
and mongodb store puts that into Meta objects faster
[10:02]
CDotSvenDowideit: I went down the format= -> SQL route as well; it does make sense [10:03]
SvenDowideitbut 200,000 topics is still mad
CDot yup, i know
[10:03]
CDotbut frustrating because it requires the full analysis of the query [10:03]
MichaelDaumformatting on the server is requiring a trip back and forth for every different query or output format. [10:03]
SvenDowideitdid it in a previous life
pre-twiki :}
[10:03]
JulianLevenspharvey: +1, partial topic loading would help. I'm capturing preferences in my store but current preferences back end is passed a whole topic, why not just the preferences? [10:03]
SvenDowideitwhen you have massive result sets, that round trip is less than xfering the result set [10:04]
CDotJulianLevens: bad design and bad implementation, mostly [10:04]
SvenDowideitits all about doing it when it makes sense [10:04]
CDotthe TablesParser is the first attempt to "decompose" a topic [10:04]
SvenDowideityup [10:04]
CDotCDot would like to see other decompositions / views
including "just the preferences"
[10:04]
MichaelDaumit needs decomposition on its way into the indexer already [10:05]
CDotdecomposition can be done forward or backward [10:05]
MichaelDaumindexer/database ... you name it [10:05]
CDoti.e. create the decomposed form from the text on load, *or* on save
and recompose on load
[10:05]
MichaelDaumonce the backend received the data decomposed during analysis time, there's no need to do it again on query time. [10:06]
CDotcorrect [10:06]
SvenDowideitthat was one reason why mongodb looked good [10:06]
CDotbecause it did that decomposition? [10:06]
MichaelDaumthis goes so far that even foswiki doesn't necessarily need to look into decomposed result bits being passed to the browser
in fact it only routes data
[10:07]
SvenDowideitmongodb stores a decomposed form [10:07]
MichaelDaumadding some acl filters on top. [10:07]
CDotMichaelDaum: right - that's basically what ERP is doing, in a crude way. [10:07]
SvenDowideitand has pre-decomposed acl data so it can be filtered on at query time
acl tests with mongodb are crazily fast
[10:08]
MichaelDaummongodb as well as solr are good examples of backends that deliver ready-to-be-used json ... and render them in the browser [10:08]
SvenDowideitso long as you don't have more groups than you have topics [10:08]
pharveyit's all about reducing chattiness [10:09]
SvenDowideitactually, i'm not sure its 'all'
that's the problem with mongodb _now_
but that's because the other bottlenecks are gone
[10:09]
pharveyI also have perf problems with a java product, mostly due to the fantastically huge number of queries it fires for every page view [10:10]
SvenDowideitfor eg, sql based db's will have more problems re-composing a topic for a full topic load [10:10]
whereas mongo does _that_ trivially
[10:10]
pharvey(mysql FWIW :P) [10:10]
CDoty, too many requests to fill a single page is evil [10:10]
SvenDowideitwhereas for massive resultsets, mongo returning the full decomposed topic is a problem
where sql will already try to avoid that
[10:11]
CDotone big problem with decomposition is that FQ doesn't have any atomic bits [10:11]
SvenDowideitmeaning - sql will be better for the rare cases that you have too large a set [10:11]
CDot^GQ^FW [10:11]
SvenDowideitwhereas mongo will do better for most topic loads and views [10:11]
CDotbecause topic rendering is so heavily context dependent [10:12]
SvenDowideitCDot i think we should re-think that statement
99% of topics _do_ have atomic bits
[10:12]
CDotyes, true, but the other 1%.... [10:12]
SvenDowideitbut we focus on the hard ones, thus stopping us from making them exception cases [10:12]
CDot.... have all the money [10:12]
pharveyI personally think there is room for improvement in the mongo impl. to make it behave better with non-indexed queries, but... you already know that :) [10:12]
SvenDowideitif we speed up the 99% then the cpu has more time for the others
which is something that mongo gave us very quickly
[10:12]
CDotok, so one decomposition is to turn the topic into "I can do this" and "I can't do that" bits [10:13]
SvenDowideitcorrect [10:13]
CDotdid you actually store that decomposition in Mongo? [10:13]
pharveyOP_Ref gets tricky :} [10:14]
CDoty, I can imagine [10:14]
SvenDowideitCDot i store the Meta hash in mongo
plus some other bits
if i were doing table editing in mongodb stored foswiki
i'd store that too
[10:14]
CDotok, so you didn't decompose macro .vs. non-macro content
that was going to be another parser, but I couldn't work out how to do it
[10:15]
SvenDowideitwe didn't need it :) [10:15]
CDotfair enough [10:15]
pharveyIIRC it's basically prefs, meta, ACLs... and a bit of array-or-hash shenanigans [10:16]
SvenDowideityup [10:16]
JulianLevensFunnily enough I'm playing the same game [10:17]
CDoty, it's the right first step. I'm interested in the next step, tho [10:17]
SvenDowideityup
i found it interesting to do SQL to query foswiki txt files
[10:17]
JulianLevensBTW, I see VDBI as the store and not a cache [10:18]
CDotthis started with a discussion about WebStatistics and WebNotify, and the fact that "structured topics" are used to store them - badly [10:18]
SvenDowideity, they just need re-writing [10:18]
CDot"just" ;-) [10:18]
pharveyMETA:NOTIFY? :P [10:19]
CDot:-( [10:19]
SvenDowideitbut atm, we need trunk cleaned more [10:19]
pharveyindeed :/ [10:19]
SvenDowideitelse doing new, won't be released either [10:19]
JulianLevensI suspect simply storing topic text in a DB will gain quite a lot of performance, as long as the regex support in the DB is good enough [10:20]
CDoty, I might find some energy to do a bit once my tax form is complete :-( [10:20]
SvenDowideitMichaelDaum when you re-implemented the page cache
did you take a look at the plack cache?
i've not looked yet
but i wonder if it has a way to add dependency triggers
mmm, looks like its too primitive tho
[10:22]
MichaelDaumno I've not placked at all yet [10:24]
SvenDowideitah :)
time to think about xmas for me - laters :)
[10:24]
MichaelDaumnone of the reverse proxies I looked at had hooks to invalidate parts of a cache, not even varnish [10:25]
MartinKaufmannIn Main.SitePreferences I've got USERSTYLEURL set to some custom css file.
This used to work OK in 1.0.9 but in 1.1.6 it looks like some css is overwritten by PatternSkin CSS.
Isn't USERSTYLEURL supposed to be loaded last?
[10:37]
jastperhaps your css rules are no longer specific enough
rules loaded earlier can still have precedence over rules loaded later if they're more specific
the details are rather horrible (it's CSS, after all)
[10:40]
MartinKaufmannjast: OK, so h1:first-of-type overrules h1? [10:41]
jastit should, yes [10:41]
MartinKaufmannYes, that solved one of my problems. Thanks! [10:42]
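The specificity point above is easy to demonstrate: source order only breaks ties between equally specific selectors, so a rule loaded first can still win. A minimal sketch (colors and selectors are illustrative):

```css
/* More specific (the pseudo-class adds specificity): wins for the first h1,
   even though it is loaded earlier. */
h1:first-of-type { color: #336699; }

/* Loaded later, e.g. via USERSTYLEURL, but less specific: only applies
   where the rule above does not match. */
h1 { color: #993333; }
```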
..... (idle for 21mn)
JulianLevensHas anybody here considered using linux facilities like dnotify or incrond to recognise a change to a topic in the filesystem and force a topic save automatically
Windows equivalents do exist for dnotify and incrond
[11:03]
SvenDowideitsolr does it
and tim's task work will support it generically
or was going to
aka - yup :)
[11:07]
MichaelDaumJulianLevens, yes I use an iwatch daemon to trigger deltaindexing in solr
though that will be replaced by a general CrawlerQueue manager
[11:12]
jastthe iwatch daemon is horribly unreliable for us [11:14]
JulianLevensOK, good to know. I know a lot of people like text files, so VDBI can take advantage of this to ensure it's up to date [11:14]
jaststops actually noticing changes every now and then and needs to be restarted [11:14]
JulianLevensjast: that's part of my Q, how reliable are these tools/techniques [11:15]
MichaelDaumnever had any glitch whatsoever
jast, have you been able to find out why that happens?
JulianLevens, to cover txt edits it would suffice to have a cron job checking for file mods every quarter hour or so. no need to be so close to those changes afaik.
indexing near-realtime should then be handled by a separate daemon that watches its inbox for stuff.
then a simple afterSave/Upload will post to that queue.
moving it out into a separate process has pros and cons of course. the biggest pro is save time. it really becomes a problem on bulk save operations or bulk uploads when you index during an afterSave, where people have to wait for the next page til things are all indexed and finished.
the other reason for a real queue manager is exhaustive crawling
most crawlers do have a db full of jobs. solr plugin is missing this feature atm.
which leads to two different kinds of jobs: high prio / almost realtime indexing and low prio exhaustive crawling jobs
so while the crawler is about to do exhaustive crawling, like when data is imported the first time, it has to bypass this route as new updates need to make their way to the index on a fast lane
[11:17]
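The quarter-hour cron check MichaelDaum describes can be sketched in a few lines of Perl (directory and threshold are illustrative; the helper name is made up):

```perl
use strict;
use warnings;
use File::Find;

# Return topic files under $dir modified within the last $minutes minutes.
# A cheap cron-driven stand-in for inotify when near-realtime isn't needed.
sub recently_changed_topics {
    my ($dir, $minutes) = @_;
    my @hits;
    find(sub {
        return unless /\.txt$/;
        # -M gives the file age in days, so convert the threshold to days too
        push @hits, $File::Find::name if -M $_ < $minutes / (24 * 60);
    }, $dir);
    return sort @hits;
}
```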
jastMichaelDaum: nope
maybe it's just because there's a lot of files :)
[11:27]
MichaelDaumthat's a locking problem then inside solrjob. might be cured by a higher threshold timer batching up incoming file changes more. [11:29]
jastroughly 69k files, in fact
no, it's not a solr problem
solrjob doesn't even get called when the problem occurs
[11:29]
MichaelDaumany complexity inside iwatch.xml?
it normally only watches a single data dir
not 69k files
[11:30]
jastyeah
though afaik an inotify watch actually can't do recursive watching, so it would be one watch per subdir, too
iwatch.xml contains something like 50 watch points, but we've had a similar problem on a system with just one or two
[11:32]
MichaelDaumwatching <foswiki-dir>/data will watch for anything underneath: files and subdirs.
50 watch points is definitely something wrongish
[11:34]
jast50 different wikis, though :) [11:34]
MichaelDaum_that_ makes total sense
one watchpoint per wiki is sufficient
[11:34]
jastpowered by VHC [11:35]
MichaelDaumany script/webapp to automate wiki setup? [11:35]
jastyes [11:35]
MichaelDaumwanna contrib? [11:35]
jastI don't think it's quite ready for prime time yet [11:35]
MichaelDaumyou tested at least 50 times. good enuf in my ears ;) [11:36]
jastbut I'll bring it up with the other guys, particularly the lead dev on that particular thing [11:36]
JulianLevensMichaelDaum, jast: thanks for the feedback. I'll need to come back to it sometime, 1st task finish VDBI [11:45]
.... (idle for 18mn)
SvenDowideitgac410 - https://metacpan.org/module/File::chdir
means we can localise changes to chdir >:}
I almost created Tie::Cwd, but finally managed to find this beauty
[12:03]
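File::chdir ties a `$CWD` variable to the process working directory, so `local $CWD = $dir;` gives a chdir that is automatically undone at scope exit. Since File::chdir is a CPAN (non-core) module, here is a core-modules-only sketch of the same guard idea; `ScopedChdir` is a hypothetical helper, not part of File::chdir:

```perl
use strict;
use warnings;
use Cwd ();
use File::Temp ();

# Hand-rolled version of File::chdir's "local $CWD": chdir for the
# duration of a scope, restore automatically when the guard is destroyed.
{
    package ScopedChdir;
    sub new {
        my ( $class, $dir ) = @_;
        my $old = Cwd::getcwd();
        chdir $dir or die "chdir $dir: $!";
        return bless { old => $old }, $class;
    }
    sub DESTROY { chdir $_[0]{old} }
}

my $before = Cwd::getcwd();
{
    my $guard = ScopedChdir->new( File::Temp::tempdir( CLEANUP => 1 ) );
    # the working directory is the temp dir only inside this block
}
print Cwd::getcwd() eq $before ? "restored\n" : "not restored\n";   # restored
```

With File::chdir installed the block body collapses to `{ local $CWD = $dir; ... }`, which is why it suits localising chdir in a request-scoped server like Foswiki.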
................. (idle for 1h20mn)
laenHow do i set the default page (or /) to forward to a specific web? [13:24]
Babareasiest: you do it on the web server [13:24]
laenJust like a redirect? [13:25]
Babaryes [13:25]
SvenDowideit: http://pics.nase-bohren.de/sap-consulting.jpg :) [13:32]
................ (idle for 1h16mn)
MichaelDaumhttp://pics.nase-bohren.de/code-doesnt-work.jpg [14:48]
...... (idle for 28mn)
jastswitch LdapContrib to case insensitive = subtle breakage with old history data
this is not entirely fun...
[15:16]
GithubBot[foswiki] FoswikiBot pushed 1 new commit to master: http://git.io/pCyz2Q
foswiki/master 71cd4c2 MichaelDaum: Item11648:...
[15:20]
***GithubBot has left [15:20]
FoswikiBothttp://foswiki.org/Tasks/Item11648 [ Item11648: Add NatEditPlugin to core ] [15:20]
jastyay, 'fixed' [15:26]
MartinKaufmannIn configure there is one setting which says "Warning: I guessed this setting..." all the time, even though I set this particular setting myself to a different value (and saved it afterwards). Where is this coming from?
This particular setting gets reset to a different value once in a while when I'm trying to save other settings.
[15:38]
....... (idle for 31mn)
CDotMartinKaufmann: normally this should only happen when the setting is not present in LocalSite.cfg (is undef in the $Foswiki::cfg hash) [16:11]
MartinKaufmannCDot: The setting is listed in LocalSite.cfg. [16:16]
CDotwhich setting is it? [16:16]
MartinKaufmann{SolrPlugin}{SolrHome} [16:16]
gac410And which Foswiki version? [16:17]
MartinKaufmann1.1.6
I set it to /var/lib/tomcat6/solr/ and it tries to reset it continually to /solr
[16:17]
gac410Is there a stale setting for just {SolrPlugin} in your LSC?
If a hash is defined at a higher level, it messes up the config.
[16:18]
CDotgac410: does it? how does that happen? [16:20]
MartinKaufmanngac410: No, there is no setting with just SolrPlugin. [16:20]
CDotbut there will be other {SolrPlugin} settings that will implicitly create the higher level of the hash; this is normal
CDot has a look at the SolrPlugin checker
[16:20]
gac410CDot - It has always done that. Configure can't "get to" the subordinated definitions. ::cfg{SolrPlugin} = 'blah' will break access to {SolrPlugin}{anythingElse} [16:21]
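The clobbering gac410 describes is easy to reproduce with any nested hash under `use strict`; the config key names here just mirror the conversation:

```perl
use strict;
use warnings;

# Why a scalar at the top level of a config hash breaks nested keys:
my %cfg;
$cfg{SolrPlugin}{SolrHome} = '/var/lib/tomcat6/solr';  # autovivifies {SolrPlugin} as a hashref
print ref $cfg{SolrPlugin}, "\n";                      # HASH

$cfg{SolrPlugin} = 'blah';                             # clobbers the hashref with a string
my $ok = eval { my $x = $cfg{SolrPlugin}{SolrHome}; 1 };
print $ok ? "still works\n" : "broken: $@";            # dies under strict refs
```

Once the hashref is replaced by a string, every `{SolrPlugin}{...}` access fails, which is why a stale scalar entry in LocalSite.cfg hides all the plugin's sub-settings from configure.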
CDotoh, right - if you define it to be a non-hash then yes, it will snafu
the checker is strange
there is a condition that says "If the AutoStartDaemon option is on, and SolrHome is not defined, or it is defined but is not 'NOT SET', then complain
I would have expected that to be "If the AutoStartDaemon option is on, and SolrHome is not defined, or it is defined but is 'NOT SET', then complain"
MartinKaufmann: does it still complain if AutoStartDaemon is disabled?
[16:22]
MartinKaufmannCDot: No, the warning disappears. [16:25]
CDotCDot asks MartinKaufmann to modify lib/Foswiki/Configure/Checkers/SolrHome.pm and change the first occurrence of ' ne ' to ' eq '
and test again with AutoStartDaemon enabled
[16:26]
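The suspected inversion, reduced to code. This is a hypothetical reconstruction of the checker's condition from CDot's description, not the actual SolrHome.pm source:

```perl
use strict;
use warnings;

# Buggy: complains whenever SolrHome IS set to a real value ('ne').
sub complain_buggy {
    my ( $auto, $home ) = @_;
    return $auto && ( !defined $home || $home ne 'NOT SET' );
}

# Fixed: complains only when SolrHome is missing or still the placeholder ('eq').
sub complain_fixed {
    my ( $auto, $home ) = @_;
    return $auto && ( !defined $home || $home eq 'NOT SET' );
}

# With AutoStartDaemon on and a correctly configured SolrHome:
my $home = '/var/lib/tomcat6/solr';
print complain_buggy( 1, $home ) ? "buggy: complains\n" : "buggy: quiet\n";
print complain_fixed( 1, $home ) ? "fixed: complains\n" : "fixed: quiet\n";
```

The one-character `ne`/`eq` swap flips the warning from "fires on every valid setting" to "fires only when the setting is genuinely unset", matching what MartinKaufmann observed.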
MartinKaufmannCDot: I don't know why it's enabled in the first place as I don't need the AutoStartDaemon anyway.
Thanks for your help!
[16:26]
CDotinteresting as that may be, we still need to isolate the bug to help MichaelDaum
and make sure it is reported
[16:27]
MartinKaufmannCDot: If I turn it back on, the warning returns. [16:27]
CDoteven with the modification I suggested? [16:28]
MartinKaufmannSorry, I missed that line. Just a second.
CDot: With your modification and Daemon on, there is no warning.
[16:28]
CDotok, so that's the problem then. Please craft a bug report for MichaelDaum. [16:31]
.... (idle for 15mn)
MartinKaufmannI opened a new task: http://foswiki.org/Tasks/Item12293
At the same time I managed to break my installation. Anyone familiar with this error: Eval-group in insecure regular expression in regex m/(?x-ism:<(?-xism:(?-xism:[\\x{003a}\\x{0041}-\\x{005a}\\x{005f}\\x{0061}-\\x{007a}\\x{00c0}-\\x{00d6}\\x{00d8}-\\x{00f6}\\x{00f8.../ at /usr/local/share/perl/5.10.1/XML/Easy/Syntax.pm line 962
Just a few topics react like that. The rest seems to be OK.
[16:46]
CDotgood grief.... [16:49]
MartinKaufmannIt seems that topics related to SolrPlugin like System.SolrPlugin and System.SolrSearch are broken.
There is also a less evil looking error message available...
Attempt to reload Foswiki/Plugins/SolrPlugin/Search.pm aborted.
Compilation failed in require at /home/httpd/foswiki-1.1.6/lib/Foswiki/Plugins/SolrPlugin.pm line 143
Well, I got to run. Just when I thought I finally had a working SolrPlugin...
Thanks again for your help.
[16:50]
gac410All. Security alerts http://foswiki.org/Support/SecurityAlert-CVE-2012-6329 and http://foswiki.org/Support/SecurityAlert-CVE-2012-6330 are now public information. [17:00]
FoswikiBot[ SecurityAlert-CVE-2012-6330 ] [17:00]
CDotMartinKaufmann: that's one for Micha, I'm afraid. Everything looks OK in the code from a cursory inspection/ [17:00]
gac410It might be nice if the FoswikiUpdatesPlugin could also alert for CVEs, maybe even prior to public availability [17:02]
CDotCDot is well impressed with the Mint installation process. Very slick (so far) [17:11]
GithubBot[foswiki] FoswikiBot pushed 1 new commit to master: http://git.io/IWKXUA
foswiki/master 86839a1 TimotheLitt: Item12180: Upgrades for display of expert items with errors. ExpireAfter was not a NUMBER. ...
[17:19]
***GithubBot has left [17:19]
FoswikiBothttp://foswiki.org/Tasks/Item12180 [ Item12180: Implementation for AJAXOnDemandCheckersForConfigure ] [17:19]
...... (idle for 26mn)
gac410Anyone have any idea why the "Plain Text" button on the CVE pages fails to print the patches themselves? [17:45]
.... (idle for 19mn)
CDotgac410: which page? [18:04]
...... (idle for 29mn)
GithubBot[foswiki] FoswikiBot pushed 1 new commit to master: http://git.io/bdzIDQ
foswiki/master b791bcc TimotheLitt: Item12035: Convert to using (TWiki-compatible) NeedsQuery interface; remove private GROUP interface from Valuer....
[18:33]
***GithubBot has left [18:33]
FoswikiBothttp://foswiki.org/Tasks/Item12035 [ Item12035: Add new data type to configure: BOOLGROUP for a group of checkboxes. ] [18:33]
.... (idle for 17mn)
gac410Looks like a bug in our render, or a misuse of templates. pharvey: Your SecurityAlertViewTemplate leaves verbatim blocks unrendered as <!--xxVERBATIMxx--> placeholders [18:50]
..................... (idle for 1h40mn)
ping pharvey, you around? [20:30]
................................. (idle for 2h43mn)
damasterRan into a problem with PatchItem12285Contrib. [23:13]
FoswikiBothttp://foswiki.org/Tasks/Item12285 [ Item12285: Improve MAKETEXT macro ] [23:13]
damaster"splitdir" is not exported by the File::Spec module
"splitpath" is not exported by the File::Spec module
[23:13]
gac410I damaster what's going on [23:13]
damaster"catdir" is not exported by the File::Spec module
Can't continue after import errors at /auto/adbuwiki/Foswiki-1.1.5/lib/Foswiki/Configure/PatchFile.pm line 9
BEGIN failed--compilation aborted at /auto/adbuwiki/Foswiki-1.1.5/lib/Foswiki/Configure/PatchFile.pm line 9.
This is using configure to install the patch
[23:13]
gac410wow. strange. We've had a bunch of folks install. To patch your system, you can also use the patch utility with either of the Item12285 patch files.
what perl are you running?
[23:14]
damasterThis is perl, v5.8.8 built for x86_64-linux-thread-multi
How do you run the patch utility?
[23:17]
gac410hang on a sec
cd to your foswiki directory, backup your lib/Foswiki/Macros/MAKETEXT.pm first
then patch -p0 < working/configure/patch/Item12285-002.patch
[23:18]
damasterok [23:20]
gac410let me try it here to be sure...
yeah that worked here for me.
patch usually is installed on linux .. I hope.
[23:21]
damasterpatching file lib/Foswiki/Macros/MAKETEXT.pm
Hunk #2 succeeded at 28 with fuzz 1.
[23:23]
gac410okay. possibly a slight offset in the line numbers. What release of foswiki are you using? [23:23]
damasterJust a patch to MAKETEXT.pm? Wasn't there two things in the Item12285?
1.1.5
[23:23]
gac410No. That's it. [23:23]
pharveyFoswikiBot: corelist File::Spec [23:24]
FoswikiBotpharvey: File::Spec was first released with perl 5.00405 (released on 1999-04-29) [23:24]
gac410There were two patch files, because there was a very slight difference between 1.1.0-2 and 1.1.3-6
And the patch tool I used in PatchFoswikiContrib required an exact match.
If you enter %MAKETEXT{"test [_101]"}% in a topic, it will have an error after the successful patch.
[23:24]
damasterOnce patched, will the patch show up in the Install and Update Extensions list as installed? [23:26]
***flexibeast has left [23:26]
gac410The Extension will show up even if the patch failed. :( [23:26]
damasterOh! How do I verify it? [23:27]
gac410What I just said. Edit a topic and enter that MAKETEXT macro
Try to expand parameter 101. %MAKETEXT{"test [_101]"}%
The patch will issue an error if the parameter is not between 1 and 100.
pharvey, I wonder if they changed the default export ... I may need to add a qw( splitdir splitpath catdir) to the use.
[23:27]
damastertest Excessive parameter number 101, MAKETEXT rejected.
good, thanks
[23:29]
gac410pharvey. nope. damaster. you must have a very old File::Spec.
RedHat?
[23:30]
damasterYes [23:30]
gac410Well that's lousy news. use File::Spec qw( splitdir splitpath catdir ); should have worked. [23:31]
damasterCisco takes a long time to update things [23:31]
gac410So does RedHat. [23:31]
damasterDefinition of Enterprise... :-( [23:31]
gac410damaster: Could you do the following: perl -e"use File::Spec; print File::Spec->VERSION();"
just to print out the version of File::Spec you have?
[23:33]
damasterLooks like 3.33 [23:35]
gac410okay thanks. Gives me a place to start looking. It works with 3.39_02 [23:36]
damasterAh, ha
Have fun
[23:36]
gac410pharvey ... must be a local redhat thing with their bastardized perl. Perlbrew 5.8.8 with File::Spec 3.33... works just fine.
any ideas?
[23:38]
pharveybloody hell.
I honestly can't imagine why that would happen
The obvious work-around is fully qualify the module's functions
[23:42]
gac410They are already all used in a qualified fashion File::Spec->catdir() ... etc. Well, actually, I don't use catdir, but all uses are indeed qualified.
okay stupid newbie question. When do I use File::Path::mkpath() and when File::Path->mkpath()
I think my PatchFile.pm needs some tweaking.
[23:48]
