
Make CI great again #244

Closed
czapiga opened this issue Apr 6, 2017 · 2 comments
czapiga (Contributor) commented Apr 6, 2017

We have to either fix the failing tests or disable them. Right now, every time someone updates a pull request they have to check the Travis logs, because that's the only way to find out whether their change broke something or it's just the regularly failing test cases.

rafalcieslak self-assigned this Apr 6, 2017
rafalcieslak (Contributor) commented

I'm glad I'm not the only one who is bothered by this. I've been trying to solve this for the past few weeks, and I have a feeling we're very close. All currently observable Travis failures are fixed by #234, and if it were up to me I'd have merged that branch a long time ago, but @cahirwpz dislikes my primitive locking strategy and prefers to implement major changes to the scheduler while we're at it...

The problem is that there are no particular failing tests (the entire kernel is falling apart), so it's not solvable by simply disabling a test or two until we fix things. It's also possible that Travis will still report errors even after we're done with #234, but I'm unable to fix those beforehand, as testing the kernel on other (local) machines doesn't reveal any problems.

So stay patient - I'm on it, but my progress is currently blocked by #234. I understand this is not a satisfying answer, but it's all I can do for now.

cahirwpz (Owner) commented Apr 6, 2017

Though I consider #234 to be a hack, I've merged it, as there's no simple solution to the problem. I've spent several hours reading the *BSD sources, and a proper locking strategy for the scheduler is another long story.

@rafalcieslak It's not that I dislike your change. I'm just trying to figure out if there's a better solution. The answer is yes – but not without significant effort, which we cannot afford now.
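
For readers landing on this thread later, here is a minimal, purely illustrative C sketch of the coarse-grained ("big lock") scheduler locking scheme being discussed: a single spinlock serializes every scheduler operation, which is simple and correct but rules out any concurrency inside the scheduler, hence the interest in a finer-grained design. None of the identifiers below come from the Mimiker sources; they are hypothetical.

```c
/*
 * Illustrative sketch only, not taken from the Mimiker sources.
 * A single global spinlock guards all scheduler state, so every
 * scheduler entry point is serialized against every other one.
 */
#include <stdatomic.h>

struct thread;                       /* opaque stand-in for the kernel's thread type */

typedef struct spinlock {
  atomic_flag locked;
} spinlock_t;

static spinlock_t sched_lock = { ATOMIC_FLAG_INIT };   /* the "big lock" */

static void spin_acquire(spinlock_t *l) {
  /* busy-wait until the test-and-set succeeds */
  while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
    ;
}

static void spin_release(spinlock_t *l) {
  atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

/* Every scheduler operation takes the same global lock. */
void sched_add(struct thread *td) {
  spin_acquire(&sched_lock);
  (void)td;  /* placeholder; real code would enqueue td on the run queue */
  spin_release(&sched_lock);
}

void sched_switch(void) {
  spin_acquire(&sched_lock);
  /* ... pick the next runnable thread and context-switch to it ... */
  spin_release(&sched_lock);
}
```

The locking in #234 is presumably in this spirit; a finer-grained design (e.g. per-runqueue locks, as in the *BSD schedulers mentioned above) would allow more parallelism at the cost of much more careful lock ordering.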

czapiga closed this as completed Apr 7, 2017